3rd Workshop on Adversarial Learning Methods for Machine Learning and Data Mining @ KDD 2021 (virtual workshop)
Presentations of Accepted Papers at AdvML'21
Workshop Agenda (virtual event): August 15th, 2021, 4-8 pm (US eastern time)
4-5 pm: Invited talk by Xue (Shelley) Lin: Secure Deep Learning – Adversarial T-shirt, Attack Detection, and Robust Ensemble
5-6 pm: AdvML 2021 Rising Star Awards & Presentations (sponsored by MIT-IBM Watson AI Lab)
6-7 pm: Presentations of Accepted Papers and Live Q&A
7-8 pm: Invited talk by Xiaoming Liu: On the Detection and Reverse Engineering of Diverse Attacks to Faces
MIT-IBM Watson AI Lab Best Paper Award: Vito Walter Anelli (Politecnico di Bari); Yashar Deldjoo (Politecnico di Bari); Tommaso Di Noia (Politecnico di Bari); Felice Antonio Merra (Politecnico di Bari). Understanding the Effects of Adversarial Personalized Ranking Optimization Method on Recommendation Quality
One Best Paper Award and Two Rising Star Awards are sponsored by the MIT-IBM Watson AI Lab with cash prizes ($500 each)!
Co-located conference: KDD 2021 (virtual conference)
Workshop Date and Time: August 15th, 2021, 4-8 pm (US eastern time)
Organizers: Pin-Yu Chen (IBM Research), Cho-Jui Hsieh (UCLA), Bo Li (UIUC), Sijia Liu (Michigan State University)
Paper Submission Deadline: May 28th, 2021 (final; extended from May 20th, 2021)
Notification Date: June 10th, 2021
Submission Site: CMT
Paper submission format: ACM template (sample-sigconf), 4 pages excluding references and supplementary materials. Authors may anonymize their submissions, but are not required to do so.
Call for AdvML Rising Star Award Nominations! (Due June 25th)
Accepted Papers:
Vito Walter Anelli (Politecnico di Bari); Yashar Deldjoo (Politecnico di Bari); Tommaso Di Noia (Politecnico di Bari); Felice Antonio Merra (Politecnico di Bari). Understanding the Effects of Adversarial Personalized Ranking Optimization Method on Recommendation Quality [MIT-IBM Watson AI Lab Best Paper Award]
David Stutz (Max Planck Institute for Informatics); Matthias Hein (University of Tübingen); Bernt Schiele (MPI Informatics). Relating Adversarially Robust Generalization to Flat Minima
David Stutz (Max Planck Institute for Informatics); Nandhini Chandramoorthy (IBM T. J. Watson Research Center); Matthias Hein (University of Tübingen); Bernt Schiele (MPI Informatics). Bit Error Robustness for Energy-Efficient DNN Accelerators
Nimrah Shakeel. Context-Free Word Importance Scores for Attacking Neural Networks
Jacob M Springer (Los Alamos National Laboratory); Bryn Reinstadler (MIT); Una-May O'Reilly (MIT). STRATA: Simple, Gradient-Free Attacks for Models of Code
Clayton B Washington (The Ohio State University); Maximum Wilder-Smith (California State Polytechnic University at Pomona); Tingting Chen (California State Polytechnic University at Pomona); Hao Ji (California State Polytechnic University at Pomona). Robust Localized Physical Attacks on Deep Learning Classifiers for Objects with Arbitrary Surface
Ankita Shukla (Arizona State University); Pavan Turaga (Arizona State University); Saket Anand (Indraprastha Institute of Information Technology Delhi). Cleaning Adversarial Perturbations with Image-Subspace Projections
Yize Li (Northeastern University); Pu Zhao (Northeastern University); Yao Yuguang (Michigan State University); Vishal Asnani (Michigan State University); Yifan Gong (Northeastern University); Yimeng Zhang (Michigan State University); Zhengang Li (Northeastern University); Xiaoming Liu (Michigan State University); Sijia Liu (Michigan State University); Xue Lin (Northeastern University). Supervised Classification on Deep Neural Network Attack Toolchains
Chenan Wang (Northeastern University); Pu Zhao (Northeastern University); Siyue Wang (Northeastern University); Xue Lin (Northeastern University). Detection and Recovery Against Deep Neural Network Fault Injection Attacks Based on Contrastive Learning
Gihyuk Ko (Carnegie Mellon University); Gyumin Lim (Korea Advanced Institute of Science and Technology). Unsupervised Detection of Adversarial Examples with Model Explanations
Topics of interest include but are not limited to:
Adversarial attacks and defenses in machine learning and data mining
Provably robust machine learning methods and systems
Robustness certification and property verification techniques
Representation learning, knowledge discovery and model generalizability
Generative models and their applications (e.g., generative adversarial nets)
Robust optimization methods and (computational) game theory
Explainable and fair machine learning models via adversarial learning techniques
Transfer learning, multi-agent adaptation, self-paced learning
Privacy and security in machine learning systems
Novel applications and innovations using adversarial machine learning and data mining