Invited Talks for AdvML'21

Northeastern University

Title: Secure Deep Learning – Adversarial T-shirt, Attack Detection, and Robust Ensemble

Abstract: Deep learning techniques have achieved best-in-class performance in many application domains such as autonomous driving, healthcare, and robotics. However, they may be vulnerable to both test-time and training-time attacks, such as adversarial perturbations, patch attacks, and trojan attacks. This talk introduces recent work on adversarial deep learning from our group. In the first part of the talk, I will present our work on the Adversarial T-shirt, a robust physical adversarial example for evading person detectors even when it undergoes non-rigid deformation due to a moving person’s pose changes. To the best of our knowledge, this is the first work that models the effect of deformation when designing physical adversarial examples for non-rigid objects such as T-shirts. In the second part, I will talk about recent progress on our DARPA RED project, in collaboration with Michigan State University. For this project, our group has developed a supervised meta-classifier that predicts attack attributes from a given attack instance. Lastly, I will introduce ongoing work on robust deep learning against multiple perturbation types. We propose a model-ensemble-based defense that achieves broad attack coverage and strong robustness. We then explore various model compression schemes to address the large model size of the ensemble without compromising accuracy.


Michigan State University

Title: On the Detection and Reverse Engineering of Diverse Attacks to Faces

Abstract: The human face is a common object of interest when studying the vulnerability of machine learning models and computer vision applications. There have been diverse types of attacks on faces, such as adversarial attacks, digital manipulations, and physical spoofs. From the perspective of a defender, this talk will introduce our recent efforts on detecting these attack types individually and jointly, as well as reverse engineering various information about the attacking process, including the attacked spatial area, the additive attack signal, the attack model, etc. We will also describe some extensions to detecting attacks on generic images beyond faces.

Bio: Dr. Xiaoming Liu is the MSU Foundation Professor in the Department of Computer Science and Engineering of Michigan State University (MSU). He received his Ph.D. from Carnegie Mellon University in 2004. Before joining MSU in 2012, he was a research scientist at General Electric (GE) Global Research. He works on computer vision, machine learning, and biometrics, especially 3D vision and face-related analysis. Since 2012, he has helped develop a strong computer vision group at MSU, which is ranked in the top 15 in the US according to the 5-year statistics at csrankings.org. He received the 2018 Withrow Distinguished Scholar Award from MSU. He has served as Area Chair for numerous conferences, including CVPR, ICCV, ECCV, ICLR, NeurIPS, and ICML, as Co-Program Chair of the BTAS’18, WACV’18, and AVSS’21 conferences, and as Co-General Chair of the FG’23 conference. He is an Associate Editor of Pattern Recognition Letters, Pattern Recognition, and IEEE Transactions on Image Processing. He has authored more than 150 scientific publications and has filed 29 U.S. patents. His work has been cited over 13,000 times according to Google Scholar, with an h-index of 58. He is a Fellow of IAPR. More information about Dr. Liu’s research can be found at http://cvlab.cse.msu.edu