The Program

9:00 

Welcome Remarks

9:05 

Keynote

Eric Wallace (OpenAI) "Making “GPT-Next” Secure Through Data and Systems Guardrails"

I’ll talk about three recent directions from OpenAI to make our next generation of models more responsible, trustworthy, and secure. First, I will briefly outline the “Media Manager,” a tool that enables content owners to specify how they want their works included in or excluded from AI training. Next, I will do a deep dive on prompt injections and how we can mitigate them by teaching LLMs to follow instructions in a hierarchical manner. Finally, I will discuss the tensions between developer access and security, where providing access to LM output probabilities can allow adversaries to reveal the hidden size of black-box models.
Eric Wallace is a research scientist at OpenAI, where he studies the theory and practice of building trustworthy, secure, and private machine learning models. He did his PhD at UC Berkeley, where he was supported by the Apple Scholars in AI Fellowship and his research was recognized with awards at EMNLP and PETS. Prior to OpenAI, Eric interned at Google Brain, AI2, and FAIR.

10:00 - 10:30

☕️ Morning Break

10:30

GenAI Defenses

11:20

Keynote

Sven Cattell (nbhd.ai) "What Generative AI Can Learn from Traditional Security"
[slides]

Sven founded the AI Village in 2018 and has been running it ever since. He was the principal organizer of AIV’s Generative Red Team at DEF CON 31. Sven is also the founder of nbhd.ai, a startup focused on the security and integrity of datasets and the AI built on them. He was previously a senior data scientist at Elastic, where he built the malware model training pipeline. He holds a PhD in algebraic topology and completed a postdoc in geometric machine learning, where he focused on anomaly and novelty detection.

12:15 - 1:00

🍽️ Lunch

1:00

Joint Keynote with DLSP 2024

David Wagner (University of California, Berkeley) "TBA"

2:05

GenAI Attacks

2:30 - 3:00

☕️ Afternoon Break

3:00

Joint Keynote with DLSP 2024

Nicholas Carlini (Google DeepMind) "TBA"

4:05

GenAI Privacy

4:30

Panel

John McShane (Synopsys) "Shift left with AI to reduce developer overload"

John McShane is a cybersecurity expert with more than 10 years of experience in IoT engineering and cybersecurity. He has extensive experience in artificial intelligence, fuzz testing, automotive engineering, and cybersecurity testing, and holds multiple issued patents in cybersecurity testing. He received his bachelor’s in automotive technology from Southern Illinois University and a master’s in cybersecurity, with a focus on AI, from Eastern Michigan University. As a Principal Product Manager of AI at Synopsys, he specializes in developing AI-driven solutions for application security testing and the safe and secure use of AI.

Andrew Davis (HiddenLayer)

5:15

Closing Remarks