2023 AdvML Rising Star Award
Year 2023 AdvML Rising Star Awardees
Award talks and ceremonies will take place at The 2nd Workshop on New Frontiers in Adversarial Machine Learning, co-located with ICML 2023 (AdvML-Frontiers @ ICML 2023)
Talk Title: How Does an Appropriate Sparsity Benefit Robustness?
Abstract: Deep neural networks (DNNs) are notoriously vulnerable to various types of threats, including natural corruptions, adversarial attacks, and Trojan attacks. We provide effective solutions to address these vulnerabilities from the perspective of network topology. Our research focuses on identifying suitable sparsity patterns that can act as implicit regularization during robust training. During this presentation, I will delve into the benefits of sparsity in overcoming robust overfitting and achieving superior robust generalization. Additionally, I will demonstrate how sparsity can serve as an efficient detector to uncover maliciously injected Trojan patterns. Lastly, I will introduce a novel form of sparsity and highlight its contributions to certified robustness.
Talk Title: Uncovering and Mitigating Privacy Leakage in Large-scale Generative Models
Abstract: In our pursuit of AGI, it has become increasingly crucial to train generative models on extensive data repositories. Such vast datasets often contain sensitive private information, including personally identifiable information about individual entities, which could potentially be leaked by generative models trained on these datasets. In fact, it has been observed that large language models can inadvertently memorize personally identifiable information from their training data.
In my talk, I will first demonstrate that a surprisingly high degree of memorization of training images (i.e., generation of images that are pixel-level similar to training images) also occurs in large-scale vision-language diffusion models such as Stable Diffusion and Imagen. Next, I’ll present our approach towards provably privacy-preserving generation of high-fidelity synthetic images from diffusion models. Our approach achieves the simultaneous goals of strong privacy and high utility by specifically tailoring the differential privacy mechanism to the prevalent black-box, API-based threat model in generative AI. I will conclude the talk with a discussion of the broader implications of the proposed methods and future directions in privacy-preserving generative AI.
Objective
At the 2023 ICML AdvML-Frontiers workshop, two rising star awards will be given to young researchers who have made significant contributions and research advances in adversarial machine learning, with a specific emphasis on the robustness and security of machine learning systems. The applications will be reviewed by AdvML’s award committee. The awardees will give a presentation about their research work at the ICML AdvML-Frontiers workshop in July 2023. We encourage researchers from minority or underrepresented groups to apply.
Domain of Interest
We encourage researchers working on the following research topics to apply:
Adversarial attacks and defenses in machine learning and data mining
Provably robust machine learning methods and systems
Robustness certification and property verification techniques
Trustworthy machine learning and AI ethics
Machine learning under adversarial settings
Generative models and their applications (e.g., generative adversarial nets)
Robust optimization methods and (computational) game theory
Privacy and security in machine learning systems
Novel applications and innovations using adversarial machine learning
Eligibility and Requirements
Senior PhD students enrolled in a PhD program before December 2020, or researchers holding postdoctoral positions who obtained their PhD degree after April 2021
Applicants are required to submit the following materials:
(a) CV (including a list of publications)
(b) Research statement (up to 2 pages, single column, excluding references), including your research accomplishments and future research directions
(c) A 5-minute video recording summarizing your research
(d) Two letters of recommendation uploaded to this form by the referees before June 2nd, 2023 (AoE)
The awardee must attend the ICML AdvML-Frontiers workshop and give a presentation
Submit the required materials (a)–(c) to CMT by May 26th, 2023 (AoE)
Past AdvML Rising Star Awardees
Year 2022
Talk Title: Enabling Certifiable Deep Learning for Large-Scale Models towards Real-World Requirements
Year 2021
Talk Title: Does Adversarial Machine Learning Research Matter?
Talk Title: Unboxing the Black-box: A Quest for Scalable and Powerful Neural Network Verifiers
*Please email Pin-Yu Chen <pinyuchen.tw@gmail.com> for any inquiries.