Invited Talks for AdvML'22

Soheil Feizi (University of Maryland)

A Conjecture on Optimal Robustness against Poisoning Attacks via Few-shot Learning

Abstract: Data poisoning considers an adversary that distorts the training set of machine learning algorithms for malicious purposes. In this talk, I will discuss the fundamentals of provable robustness against data poisoning. In particular, I will present a conjecture relating asymptotic robustness optimality against data poisoning to few-shot learning problems. I will provide theoretical results verifying this conjecture in multiple cases. I will also show that this conjecture implies that aggregation-based defenses are (asymptotically) optimal: if we have the most data-efficient learner, we can turn it into one of the most robust defenses against data poisoning.
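The aggregation construction behind that last claim can be made concrete. Below is a minimal, hypothetical sketch in the spirit of partition-and-vote defenses such as Deep Partition Aggregation (not code from the talk; the names make_base_learner, fit, and predict are placeholders, not a real API): the training set is split into disjoint partitions, one copy of a data-efficient base learner is trained per partition, and predictions are made by majority vote, so each poisoned example can corrupt at most one voter.

# Hypothetical sketch of an aggregation-based poisoning defense in the
# spirit of partition-and-vote schemes (e.g., Deep Partition Aggregation).
from collections import Counter

def train_aggregated(examples, make_base_learner, k):
    """Deterministically split the training data into k disjoint partitions
    (by hashing each example, so a single poisoned point lands in exactly
    one partition) and fit one base learner per partition."""
    partitions = [[] for _ in range(k)]
    for example in examples:
        partitions[hash(repr(example)) % k].append(example)
    return [make_base_learner().fit(part) for part in partitions]

def predict_aggregated(models, x):
    """Majority vote over the k base learners. Poisoning m training points
    changes at most m of the k votes, so any prediction whose vote margin
    exceeds 2m is certifiably unchanged."""
    votes = Counter(model.predict(x) for model in models)
    return votes.most_common(1)[0][0]

The more data-efficient the base learner, the more partitions it tolerates at a given accuracy, and hence the larger the certified vote margin; this is roughly the intuition behind the optimality claim in the abstract.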

SueYeon Chung (New York University)

Neuro-inspired Mechanisms for Adversarial Robustness

Raman Arora (Johns Hopkins University)

Guaranteed Adversarially Robust Training of Neural Networks

Abstract: Despite the tremendous success of deep learning, neural network-based models are highly susceptible to small, imperceptible adversarial perturbations of data at test time. This vulnerability to adversarial examples severely limits the deployment of neural network-based systems, especially in critical, high-stakes applications such as autonomous driving, where safe and reliable operation is paramount. In this talk, we seek to understand why trained neural networks classify clean data with high accuracy yet remain extraordinarily fragile to strategically induced perturbations. Further, we give a first-of-its-kind computational guarantee for adversarial training, which formulates robust learning as a min-max optimization problem and has emerged as a principled approach to training models that are robust to adversarial examples.
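For readers unfamiliar with the min-max formulation the abstract refers to, here is a minimal illustrative sketch (assuming PyTorch and an l_inf-bounded PGD inner attack, in the style of Madry et al.); it shows the standard adversarial training loop, not the specific guaranteed procedure from the talk.

# Illustrative sketch of min-max adversarial training: the inner loop
# (PGD) approximates the worst-case perturbation, the outer loop runs
# SGD on those worst-case inputs.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Inner maximization: approximately solve
    max_{||delta||_inf <= eps} loss(model(x + delta), y)
    by projected gradient ascent. (Clamping x + delta to the valid
    pixel range is omitted for brevity.)"""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return delta.detach()

def adversarial_training_epoch(model, loader, optimizer):
    """Outer minimization: one epoch of SGD on worst-case perturbed
    inputs, i.e., min_theta E[max_delta loss(model(x + delta), y)]."""
    model.train()
    for x, y in loader:
        delta = pgd_attack(model, x, y)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        optimizer.step()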