2022 AdvML Rising Star Award

Talk Title: How much can we trust large language models?

Abstract: Large Language Models (LLMs, e.g., GPT-3, TNLG, T5) have shown remarkably high performance on standard benchmarks, due to their high parameter count, extremely large training datasets, and significant compute. Although the high parameter count in these models leads to more expressiveness, it can also lead to higher memorization, which, coupled with large unvetted, web-scraped datasets, can cause multiple negative societal and ethical impacts: leakage of private, sensitive information—i.e., LLMs are 'leaky'; generation of biased text—i.e., LLMs are 'sneaky'; and generation of hateful or stereotypical text—i.e., LLMs are 'creepy'. In this talk, I will go over how the issues mentioned above affect the trustworthiness of LLMs, and zoom in on how we can measure the leakage and memorization of these models. Finally, I will discuss what it would actually mean for LLMs to be privacy preserving, and what the future research directions are for making large models trustworthy.


Talk Title: Enabling Certifiable Deep Learning for Large-Scale Models towards Real-World Requirements

Abstract: Given the rising security concerns for modern deep learning systems in deployment, designing certifiable large-scale deep learning systems for real-world requirements is in urgent demand. This talk introduces our series of work on constructing certifiable large-scale deep learning systems for real-world requirements, achieving robustness against Lp perturbations, semantic transformations, poisoning attacks, and distributional shifts; fairness; and reliability against numerical defects. Then, I will share core methodologies for designing certifiable deep learning systems, including diversity-enabled training, efficient model abstraction, threat-model-dependent smoothing, and precise worst-case characterization. At the end of the talk, I will summarize several challenges that impede the large-scale deployment of certifiable deep learning and discuss future directions.


Objective

At the 2022 AdvML workshop, two rising star awards will be given to young researchers who have made significant contributions and research advances in adversarial machine learning, with a specific emphasis on the robustness and security of machine learning systems. Applications will be reviewed by AdvML's award committee. The awardees will each give a 30-minute presentation about their research at the AdvML workshop in August 2022. We encourage researchers from minority or underrepresented groups to apply.

Domain of Interest

We encourage researchers working on the following research topics to apply:

  • Adversarial attacks and defenses in machine learning and data mining

  • Provably robust machine learning methods and systems

  • Robustness certification and property verification techniques

  • Trustworthy machine learning and AI ethics

  • Machine learning under adversarial settings

  • Generative models and their applications (e.g., generative adversarial nets)

  • Robust optimization methods and (computational) game theory

  • Privacy and security in machine learning systems

  • Novel applications and innovations using adversarial machine learning

Eligibility and Requirements

  1. Senior PhD students enrolled in a PhD program before December 2019, or researchers holding postdoctoral positions who obtained their PhD degree after April 2020

  2. Applicants are required to submit the following materials:

    • CV (including a list of publications)

    • Research statement (up to 2 pages, single column, excluding references), including your research accomplishments and future research directions

    • A 5-minute video recording summarizing your research

    • Two letters of recommendation uploaded to this form by the referees before July 1st, 2022

  3. Awardees must attend the AdvML 2022 workshop and give a 30-minute presentation

  4. Submit the first three required materials (CV, research statement, and video recording) to CMT by June 24th, 2022

Past AdvML Rising Star Awardees

Year 2021

Talk Title: Does Adversarial Machine Learning Research Matter?

Talk Title: Unboxing the Black-box: A Quest for Scalable and Powerful Neural Network Verifiers

*Please email Pin-Yu Chen <pinyuchen.tw@gmail.com> for any inquiries