9:15 - 9:30 am

Introduction

by Organizers [slides]

9:30 - 10:00 am

The promise and pitfalls of generative AI 

by Kathleen Fisher (DARPA) [slides]

10:00 - 10:30 am

Can AI-Generated Text be Reliably Detected?

by Soheil Feizi (U of Maryland) [slides]

10:30 - 11:00 am

Watermarking Image Generator Models

by Florian Kerschbaum (U of Waterloo) [slides]

11:00 - 11:30 am

Robustness and Security Challenges in Large Language Models

by Tatsu Hashimoto (Stanford) [slides]

11:30 - 12:00 pm

An overview of catastrophic risks from generative AI

by Dan Hendrycks (safe.ai) [slides]

12:00 - 1:15 pm     Lunch

1:15 - 1:45 pm

Exploiting programmatic behavior of LLMs for dual-use

by Daniel Kang (UIUC) [slides]

1:45 - 2:15 pm

Baseless speculation on the future of automating attacks with generative models

by Nicholas Carlini (Google) 

2:15 - 2:45 pm

Panel Question: What attacks become easy to mount due to GenAI technologies?

Panel Lead: Anupam Datta (Truera)

Panelists: Brad Chen (Google), Khawaja Shams (Google), Zulfikar Ramzan (Aura Labs), Bradley Boyd (Stanford)

2:45 - 3:00 pm     Break

3:00 - 3:30 pm

Challenges and progress towards socially responsible GenAI

by Diyi Yang (Stanford) [slides]

3:30 - 4:00 pm

Grounding for LLMs

by Ankur Taly (Google)

4:00 - 4:45 pm

Panel Question: What are some current and emerging technologies we should pay attention to for designing countermeasures?

Panel Lead: John Mitchell (Stanford)

Panelists: Eric Mitchell (Stanford), Elie Bursztein (Google), Dawn Song (UC Berkeley), Clark Barrett (Stanford)