1st Workshop on Adversarial Learning Methods for Machine Learning and Data Mining @ KDD 2019

  • Co-located conference: KDD 2019
  • Workshop Date: August 5th, 2019, 8 am to 12 pm
  • Organizers: Pin-Yu Chen (IBM Research), Cho-Jui Hsieh (UCLA), Bo Li (UIUC), Sijia Liu (IBM Research)
  • Paper Submission Deadline: May 14th, 2019 (extended and final)
  • Notification Date: June 3rd, 2019 (extended from June 1st; decisions have been sent)
  • Submission Site: EasyChair
  • Paper submission format: ACM template, 4 pages excluding references and supporting materials
  • A best paper award with a 500 USD cash prize, sponsored by the MIT-IBM Watson AI Lab, will be presented at the workshop!
    • Best paper award: Bo Zhang, Boxiang Dong, Hui Wendy Wang and Hui Xiong. Trust-but-Verify: Result Verification of Federated Deep Learning <paper>
    • Best paper finalist: Jen-Tzung Chien and Chun-Lin Kuo. Bayesian Learning for Generative Adversarial Networks <paper>
    • Best paper finalist: Tianyun Zhang, Sijia Liu, Yanzhi Wang and Makan Fardad. Generation of Low Distortion Adversarial Attacks via Convex Programming <paper>
  • AdvML'19 is hosting a Robust Malware Detection Challenge with cash prizes sponsored by the MIT-IBM Watson AI Lab
    • Challenge Summary Video: https://youtu.be/vEDxqD8IdKI
    • Defense Track Winners: Laurens Bliek, Christian Hammerschmidt, Azqa Nadeem, Sicco Verwer
    • Defense Track Winners: Changming Xu, Ananditha Raghunath, Steven Jorgensen, Karla Mejia
    • Attack Track Winners: Laurens Bliek, Christian Hammerschmidt, Azqa Nadeem, Sicco Verwer

Topics of interest include but are not limited to:

  • Adversarial attacks (e.g., evasion, poisoning, and model inversion) and defenses in machine learning and data mining
  • Robustness certification and property verification techniques
  • Representation learning, knowledge discovery and model generalizability
  • Model robustness against model compression (e.g., network pruning and quantization)
  • Generative models and their applications (e.g., generative adversarial nets)
  • Robust optimization methods and (computational) game theory
  • Explainable and fair machine learning models via adversarial learning techniques
  • Transfer learning, multi-agent adaptation, self-paced learning
  • Privacy and security in machine learning systems
  • Trustworthy data mining and machine learning

Workshop Agenda:

8:00 -- 8:50 am Invited Speaker: Le Song (Georgia Institute of Technology). Title: Adversarial Machine Learning for Graph Neural Networks

8:50 -- 10:00 am Poster spotlight talks + best paper award talk + 1st poster session + coffee break

10:00 -- 10:50 am Invited Speaker: Yuan (Alan) Qi (Ant Financial). Title: Privacy Preserving Machine Learning for Inclusive Finance

10:50 -- 11:40 am Invited Speaker: Xiaojin (Jerry) Zhu (University of Wisconsin-Madison). Title: Adversarial Machine Learning in Sequential Decision Making

11:40 am -- 12:00 pm Challenge winner announcement + 2nd poster session

Accepted Papers:

  1. Jen-Tzung Chien and Chun-Lin Kuo. Bayesian Learning for Generative Adversarial Networks <paper> [Best Paper Award Finalist]
  2. Hao Cheng, Kaidi Xu, Sijia Liu, Pin-Yu Chen, Pu Zhao and Xue Lin. Defending against Backdoor Attack on Deep Neural Networks <paper>
  3. Bo Zhang, Boxiang Dong, Hui Wendy Wang and Hui Xiong. Trust-but-Verify: Result Verification of Federated Deep Learning <paper> [Best Paper Award Winner]
  4. Eitan Rothberg, Tingting Chen, Hao Ji and Roger Luo. Localized Adversarial Training for Increased Accuracy and Robustness in Image Classification <paper>
  5. Tianyun Zhang, Sijia Liu, Yanzhi Wang and Makan Fardad. Generation of Low Distortion Adversarial Attacks via Convex Programming <paper> [Best Paper Award Finalist]
  6. Yiming Gu, Kishore Reddy, Michael Giering and Amit Surana. SignalGAN: A Conditional Generative Adversarial Network for Signal Modeling <paper>
  7. Xiaoyun Wang, Minhao Cheng, Joe Eaton, Cho-Jui Hsieh and Felix Wu. Fake Node Attacks on Graph Convolutional Networks <paper>
  8. Garrett Wilson and Diane Cook. Multi-Purposing Domain Adaptation Discriminators for Pseudo Labeling Confidence <paper>
  9. Zhou Yang, Long Nguyen and Fang Jin. Coordinating Disaster Emergency Response with Heuristic Reinforcement Learning <paper>
  10. Arash Rahnama, Andre Nguyen and Edward Raff. Connecting Lyapunov Control Theory to Adversarial Attacks <paper>
  11. Thomas Hogan, Bhavya Kailkhura and Ryan Goldhahn. Universal Decision-Based Black-Box Perturbations: Breaking Security-Through-Obscurity Defenses <paper>
  12. Baohua Sun, Lin Yang, Wenhan Zhang, Michael Lin, Jason Dong, Charles Young and Patrick Dong. Transfer Learning from Vision to Language and Model Compression on a 300mW CNN Accelerator Chip <paper>
  13. Xiao Wang, Siyue Wang, Pin-Yu Chen, Xue Lin and Peter Chin. Block Switching: A Stochastic Approach for Deep Learning Security <paper>
  14. Boli Fang, Miao Jiang and Jerry Shen. Deep Generative Inpainting with Comparative Sample Augmentation <paper>
  15. Owen Levin, Zihang Meng, Vikas Singh and Xiaojin Zhu. Fooling Computer Vision into Inferring the Wrong Body Mass Index <paper>
  16. Djallel Bouneffouf. Corrupted Contextual Bandits: Online Learning with Corrupted Context <paper>