Invited Talks for AdvML'19

Le Song (Georgia Institute of Technology)

Adversarial Machine Learning for Graph Neural Networks

Graph neural networks have shown exciting results in a diverse range of applications such as social network analysis, drug discovery, and knowledge graph reasoning. However, little attention has been paid to the robustness of such models, in contrast to the large body of work on adversarial attacks and defenses for images and text. In this talk, I will focus on adversarial attacks that fool graph neural network models by modifying the combinatorial structure of the graph. I will explain a reinforcement-learning-based attack method that learns a generalizable attack policy while requiring only prediction labels from the target classifier. I will also present variants of genetic algorithms and gradient-based methods for scenarios where prediction confidence or gradients are available. Using both synthetic and real-world data, I will show that a family of graph neural network models are vulnerable to these attacks, in both graph-level and node-level classification tasks. I will also show that such attacks can be used to diagnose the learned classifiers.
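To make the black-box setting concrete, here is a minimal, hypothetical sketch (not the talk's RL-based method) of a structural attack that flips one edge of a toy graph to change a one-layer GCN-style classifier's prediction for a target node, querying only the classifier's hard labels. All names and the toy model are illustrative assumptions.

```python
import numpy as np

def gcn_predict(adj, feats, W):
    """Toy one-layer GCN-style classifier: self-loops, row-normalized
    neighborhood averaging, then a linear map; returns hard labels only."""
    a_hat = adj + np.eye(adj.shape[0])                 # add self-loops
    a_hat = a_hat / a_hat.sum(axis=1, keepdims=True)   # row-normalize
    return (a_hat @ feats @ W).argmax(axis=1)

def greedy_edge_flip_attack(adj, feats, W, target, budget=1):
    """Black-box structural attack: greedily try flipping edges incident to
    the target node, keeping a flip as soon as the predicted label changes.
    Uses only predicted labels from the classifier, no gradients."""
    adj = adj.copy()
    orig = gcn_predict(adj, feats, W)[target]
    n = adj.shape[0]
    for _ in range(budget):
        for u in range(n):
            if u == target:
                continue
            adj[target, u] = adj[u, target] = 1 - adj[target, u]  # flip edge
            if gcn_predict(adj, feats, W)[target] != orig:
                return adj                                        # success
            adj[target, u] = adj[u, target] = 1 - adj[target, u]  # undo
    return adj
```

The exhaustive greedy search stands in for the learned attack policy described in the talk, which scales this idea by training a policy rather than enumerating candidate flips.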


Yuan (Alan) Qi (Ant Financial)

Privacy Preserving Machine Learning for Inclusive Finance

Data privacy and security have drawn much attention in recent years. A critical challenge is how to bridge isolated data islands to build better machine learning systems while protecting data privacy and meeting regulatory compliance requirements. In this talk, I will present some of the recent progress we have made at Ant Financial to address this challenge, with applications to inclusive finance.


Xiaojin (Jerry) Zhu (University of Wisconsin-Madison)

Adversarial Machine Learning in Sequential Decision Making

Most adversarial machine learning research focuses on supervised learning. Supervised learning, however, does not cover all use cases of machine learning, and consequently not all of its vulnerabilities. In this talk we discuss adversarial vulnerabilities in sequential decision making, specifically attacks on multi-armed bandits, online gradient descent, and autoregressive models. We show that control theory and reinforcement learning are natural frameworks for modeling the attacker in these sequential settings.
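As one concrete illustration of the bandit setting, here is a minimal, hypothetical sketch (not the speaker's analysis) of reward poisoning against an ε-greedy learner: the attacker subtracts a small penalty from the observed reward of every non-target arm, steering the learner toward the attacker's target even though the environment is unchanged. All function names and parameters are illustrative assumptions.

```python
import random

def eps_greedy_bandit(true_means, attacker_target=None, rounds=2000,
                      eps=0.1, penalty=1.0, seed=0):
    """ε-greedy bandit learner, optionally under a reward-poisoning attacker.

    If attacker_target is set, the attacker perturbs the observed reward of
    every non-target arm by -penalty before the learner sees it.
    Returns the pull count of each arm."""
    rng = random.Random(seed)
    k = len(true_means)
    counts = [0] * k
    est = [0.0] * k  # running mean reward estimates
    for _ in range(rounds):
        if rng.random() < eps:
            arm = rng.randrange(k)                      # explore
        else:
            arm = max(range(k), key=lambda a: est[a])   # exploit
        reward = true_means[arm] + rng.gauss(0, 0.1)
        if attacker_target is not None and arm != attacker_target:
            reward -= penalty                           # poisoned feedback
        counts[arm] += 1
        est[arm] += (reward - est[arm]) / counts[arm]   # incremental mean
    return counts
```

The sketch shows why the sequential setting differs from supervised learning: the attacker perturbs the feedback loop over time, which is naturally modeled with the control-theoretic and reinforcement-learning tools discussed in the talk.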
