2023 AdvML Rising Star Award
Talk Title: How Does an Appropriate Sparsity Benefit Robustness?
Abstract: Deep neural networks (DNNs) are notoriously vulnerable to various types of threats, including natural corruptions, adversarial attacks, and Trojan attacks. We provide effective solutions to address these vulnerabilities from the perspective of network topology. Our research focuses on identifying suitable sparsity patterns that can act as implicit regularization during robust training. During this presentation, I will delve into the benefits of sparsity in overcoming robust overfitting and achieving superior robust generalization. Additionally, I will demonstrate how sparsity can serve as an efficient detector to uncover maliciously injected Trojan patterns. Lastly, I will introduce a novel form of sparsity and highlight its contributions to certified robustness.
Talk Title: Uncovering and Mitigating Privacy Leakage in Large-scale Generative Models
Abstract: In our pursuit of AGI, it has become increasingly crucial to train generative models on extensive data repositories. Such vast datasets often contain sensitive private information, including personally identifiable information about individual entities, which could potentially be leaked by generative models trained on these datasets. In fact, it has been observed that large language models can inadvertently memorize personally identifiable information from their training data.
In my talk, I will first demonstrate that a surprisingly high degree of training-image memorization, i.e., generation of images that are pixel-level similar to training images, also occurs in large-scale vision-language diffusion models (such as Stable Diffusion and Imagen). Next, I'll present our approach towards provably privacy-preserving generation of high-fidelity synthetic images from diffusion models. Our approach achieves strong privacy and high utility simultaneously by tailoring the differential privacy mechanism to the black-box, API-based threat model prevalent in generative AI. I will conclude the talk with a discussion of the broader implications of the proposed methods and future directions in privacy-preserving generative AI.
At the 2023 ICML AdvML-Frontiers workshop, two Rising Star Awards will be given to young researchers who have made significant contributions and research advances in adversarial machine learning, with a specific emphasis on the robustness and security of machine learning systems. Applications will be reviewed by AdvML's award committee. The awardees will present their research at the ICML AdvML-Frontiers workshop in July 2023. We encourage researchers from minority or underrepresented groups to apply.
Domain of Interest
Eligibility and Requirements
Past AdvML Rising Star Awardees
*Please email Pin-Yu Chen <email@example.com> for any inquiries.