Workshop of Adversarial Machine Learning towards Advanced Vision Systems (AMLAVS)

co-located at The 16th Asian Conference on Computer Vision (ACCV2022)

Overview

With the advances of deep learning techniques, many computer vision tasks are now performed at a super-human level. However, adversarial machine learning researchers have demonstrated that such vision systems are not yet as robust as the human visual system. As a new gamut of technologies, adversarial machine learning covers the study of both the capabilities and the malicious exploitation of machine learning models in adversarial scenarios. The potential vulnerability of ML models to malicious attacks can have severe consequences for safety-critical systems. The best-known attack vector is imperceptible perturbations applied to input images or videos. Without being alarmist, researchers in machine learning and computer vision have a responsibility to preempt attacks and build safeguards, especially when a task is critical to information security or human lives (e.g., autonomous driving systems). We need to deepen our understanding of machine learning in adversarial environments.

Thus motivated, we are organizing the ACCV 2022 workshop on “Adversarial Machine Learning towards Advanced Vision Systems” in December 2022, in conjunction with the 16th Asian Conference on Computer Vision (ACCV 2022) in Macau, China. We will invite world-renowned keynote speakers to share their latest research progress in this direction at the workshop. In addition, we invite researchers to submit their latest research to the workshop and present it in a poster session.

Keynote Speakers

Dr. Pin-Yu Chen: Dr. Pin-Yu Chen is a principal research scientist at the IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA. He is also the chief scientist of the RPI-IBM AI Research Collaboration and PI of ongoing MIT-IBM Watson AI Lab projects. Dr. Chen received his Ph.D. degree in electrical engineering and computer science from the University of Michigan, Ann Arbor, USA, in 2016. His recent research focuses on adversarial machine learning and the robustness of neural networks, and his long-term research vision is to build trustworthy machine learning systems. At IBM Research, he has received several research accomplishment awards, including being named an IBM Master Inventor and receiving an IBM Corporate Technical Award in 2021. His research contributes to IBM open-source libraries, including the Adversarial Robustness Toolbox (ART 360) and AI Explainability 360 (AIX 360). He has published more than 50 papers related to trustworthy machine learning at major AI and machine learning conferences, given tutorials at NeurIPS’22, AAAI’22, IJCAI’21, CVPR(’20,’21), ECCV’20, ICASSP’20, KDD’19, and Big Data’18, and organized several workshops on adversarial machine learning. He received the IEEE GLOBECOM 2010 GOLD Best Paper Award and the UAI 2022 Best Paper Runner-Up Award.


A/Prof. Dongxiao Zhu: A/Prof. Dongxiao Zhu is currently an Associate Professor in the Department of Computer Science at Wayne State University. He earned his PhD from the University of Michigan, Ann Arbor, in 2006, and his current research interests lie in trustworthy machine learning and its applications in social, health, and urban computing, with a focus on explainability, adversarial robustness, and fairness. Dr. Zhu is the founding director of the Wayne AI Research Initiative (http://ai.wayne.edu/), the director of the Trustworthy AI research lab (https://dongxiaozhu.github.io/), and the director of the Computer Science and AI graduate programs at Wayne State University. He is a recipient of the College of Engineering Research Excellence Award (2022). In addition to foundational AI research, Dr. Zhu is passionate about leveraging AI for social good; he develops tailor-made AI algorithms to support research in the life, physical, and social science domains.

Program

Dec. 4th, 2022 (Beijing 9:00 - 12:00, GMT+8)

Session One (Beijing time 9:00 - 10:25):

9:00 - 9:45 Keynote Talk (Dr. Pin-Yu Chen, IBM Thomas J. Watson Research Center, USA)

Talk Title: Reprogramming Foundation Models with Limited Resources

Talk abstract: TBA

9:45 - 10:25 Keynote Talk (A/Prof. Dongxiao Zhu, Wayne State University, USA)

Talk Title: Empowering Explainable Machine Learning through the Lens of Adversarial Robustness and Fairness

Talk abstract: Deep Neural Networks (DNNs) are complex nonlinear functions, parameterized by weights, that map inputs to outputs. They have attracted much attention in the machine learning community due to their state-of-the-art performance on various tasks. Despite this success, the explainability of complex DNNs remains an open problem, hindering their wide deployment in safety- and security-critical domains. In this talk, I will introduce our recent work on ensuring DNN explainability through the lens of adversarial robustness and fairness, in both image/text classification and recommender systems. Regarding adversarial robustness, I will describe Adversarial Gradient Integration for explaining convolutional neural network (CNN) based image classification and detection, which utilizes adversarial examples to better estimate input attributions (IJCAI-21). To incorporate the unique self-attention mechanism of Transformers in rendering explanations, I will continue with Attentive Class-Activation Tokens for explaining Transformer-based text classification (NeurIPS-22). Regarding fairness, I will describe Counterfactual Interpolation Augmentation, which enhances the explainability of CNNs via fairness-aware data augmentation (IJCAI-22). I will conclude this talk with samples of other trustworthy AI systems developed in my research group, including an interpretable recommender system (IJCAI-20), an adversarially robust image classification system (AAAI-23, AAAI-21), and a driver-centered, resource-aware Electric Vehicle (EV) charging recommender (ECML-22).

Session Two (Beijing time 10:40 - 11:55):

10:40 - 11:05 Lili Zhang, Xiaodong Wang, ADVFilter: Adversarial Example Generated by Perturbing Optical Path

11:05 - 11:30 Qingguo Zhou, Ming Lei, Peng Zhi, Rui Zhao, Jun Shen and Binbin Yong, Towards Improving the Anti-attack Capability of the RangeNet++

11:30 - 11:55 Yanli Li, Abubakar Sadiq Sani, Dong Yuan and Wei Bao, Enhancing Federated Learning Robustness Through Non-IID Features

Call for Papers

We welcome submissions on all aspects of adversarial ML for computer vision systems, including but not limited to:

• Adversarial/poisoning attacks against computer vision tasks

• Adversarial defenses to improve computer vision system robustness

• Methods of detecting/rejecting adversarial examples in computer vision tasks

• Benchmarks to reliably evaluate defense strategies

• Theoretical understanding of adversarial ML in computer vision systems

• Adversarial ML in the real world

• Representative applications/demos of adversarial ML for computer vision tasks

Submission

The proceedings will be published by Springer in the Lecture Notes in Computer Science (LNCS) series following ACCV proceedings (previous ACCV proceedings can be found here).

Selected top-quality papers will be invited for extended submission to special issues of several Q1 journals (stay tuned; we will release the details shortly).

Easychair submission:

https://easychair.org/conferences/?conf=amlavs22

Camera-ready instruction:

https://docs.google.com/document/d/17VvlHYj0hU9Xlg6vXgeHQgqN_-r69dcv/edit?usp=sharing&ouid=104746812988149595957&rtpof=true&sd=true

Important Dates


Paper submission: September 11, 2022 (11:59PM Pacific Time, extended)

Author notification: September 30, 2022 (11:59PM Pacific Time)

Camera-ready paper due: October 12, 2022 (11:59PM Pacific Time)

Organizer

Dr. Minhui (Jason) Xue, Data61, CSIRO, Australia

Dr. Huaming Chen, The University of Sydney, Australia

Program Committee:

Dr. Qingrong Chen, University of Illinois Urbana-Champaign, US

Prof. Chi-Hung Chi, Nanyang Technological University, Singapore

Dr. Tong He, The University of Adelaide, Australia

Dr. Daqi Liu, The University of Adelaide, Australia

Dr. Feng Liu, The University of Melbourne, Australia

Dr. Ruoxi Sun, Data61, CSIRO, Australia

Dr. Xiaoyu Xia, The University of Southern Queensland, Australia

Dr. Dong Yuan, The University of Sydney, Australia

Dr. Xuyun Zhang, Macquarie University, Australia