Adversarial Robustness of Deep Learning Models

Presenter: Pin-Yu Chen (IBM Research)

Conference: ECCV 2020

Date: Aug. 23, 2020

Description of the Tutorial

Despite achieving high standard accuracy on a variety of machine learning tasks, deep learning models built upon neural networks have recently been shown to lack adversarial robustness. The decisions of well-trained deep learning models can be easily falsified and manipulated, raising ever-increasing concerns in safety-critical and security-sensitive applications that require certified robustness and guaranteed reliability. In recent years, there has been a surge of interest in understanding and strengthening the adversarial robustness of AI models in different phases of their life cycle, including data collection, model training, model deployment (inference), and system-level (software + hardware) vulnerabilities, giving rise to different robustness factors and threat assessment schemes.

This tutorial will provide an overview of recent advances in research on adversarial robustness, offering both breadth of research topics and technical depth. We will cover three fundamental pillars of adversarial robustness: attack, defense, and verification. Attack refers to the efficient generation of adversarial examples or poisoned data samples for robustness assessment under different attack assumptions (e.g., white-box vs. black-box attacks, prediction evasion vs. model stealing); a minimal sketch of one such attack is given below. Defense refers to adversary detection and robust training algorithms that enhance model robustness. Verification refers to attack-agnostic metrics and certification algorithms for the proper evaluation and standardization of adversarial robustness. For each pillar, we will emphasize the tight connection between computer vision techniques and adversarial robustness, ranging from fundamental techniques, such as first-order and zeroth-order optimization, minimax optimization, geometric analysis, model compression, data filtering and quantization, subspace analysis, active sampling, and frequency component analysis, to specific applications, such as computer vision, automatic speech recognition, natural language processing, and data regression. Furthermore, we will cover new applications originating from adversarial robustness research, such as robust feature representation learning and enhanced model interpretability.
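To make the attack pillar concrete, the snippet below sketches the fast gradient sign method (FGSM), one of the simplest first-order white-box evasion attacks in the family of methods discussed above. It is a minimal illustration rather than part of the tutorial materials; the model, the perturbation budget eps, and the [0, 1] input range are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=8 / 255):
    """Fast gradient sign method (FGSM): a one-step white-box evasion attack.

    Perturbs input x in the direction that maximally increases the
    classification loss, subject to an L-infinity budget of eps.
    Assumes inputs lie in the [0, 1] range (illustrative choice).
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)  # loss the attacker wants to increase
    loss.backward()
    # One signed-gradient step, then clip back to the valid input range.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Black-box variants replace the exact gradient with estimates obtained from model queries (e.g., via zeroth-order optimization), while defenses such as adversarial training fold an inner attack step of this kind into a minimax training objective.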

This tutorial aims to serve as a short lecture for researchers and students entering the emerging field of adversarial robustness from the viewpoint of the computer vision community. Its contents will provide sufficient background for participants to understand the motivation, research progress, opportunities, and ongoing challenges in adversarial robustness, along with pointers to open-source research libraries. The outline of this tutorial is as follows.

Tutorial Outline

1. Introduction and Motivation for Studying Adversarial Robustness

2. Attack Methods

3. Defense Methods

4. Verification Methods

5. Conclusion and Open Questions

Presenter's Bio: <link>

Dr. Pin-Yu Chen is currently a research staff member at the IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA. He is also the chief scientist of the RPI-IBM AI Research Collaboration and a PI of ongoing MIT-IBM Watson AI Lab projects. Dr. Chen received his Ph.D. degree in electrical engineering and computer science and his M.A. degree in statistics from the University of Michigan, Ann Arbor, USA, in 2016. He received his M.S. degree in communication engineering from National Taiwan University, Taiwan, in 2011, and his B.S. degree in electrical engineering and computer science (undergraduate honors program) from National Chiao Tung University, Taiwan, in 2009.

Dr. Chen’s recent research focuses on adversarial machine learning and the robustness of neural networks; his long-term research vision is to build trustworthy machine learning systems. He has published more than 25 papers on trustworthy machine learning at major AI and machine learning conferences and has co-organized workshops on adversarial learning methods for machine learning and data mining, such as at KDD’19. His research interests also include graph and network data analytics and their applications to data mining, machine learning, signal processing, and cyber security. He was a recipient of the Chia-Lun Lo Fellowship from the University of Michigan, Ann Arbor, and received the NIPS 2017 Best Reviewer Award and the IEEE GLOBECOM 2010 GOLD Best Paper Award. Dr. Chen is currently on the editorial board of PLOS ONE.

At IBM Research, Dr. Chen has co-invented more than 20 U.S. patents. In 2019, he received two IBM Outstanding Research Accomplishment awards for research in adversarial robustness and trusted AI, and one Research Accomplishment award for research in graph learning and analysis.