Day 1: Thu, June 22

Economic Impacts / Session chair: Jacob Steinhardt

09:00-09:15 Jacob Steinhardt Opening Remarks

09:15-10:00 Michael Webb AI and Automation

10:05-10:50 Dhruv Madeka The Value of Human Level AI to Education

Coffee Break

11:05-11:50 Tom Davidson Will AI lead to transformative economic growth?

12:00-03:15 Lunch/Break

Alignment / Session chair: Amanda Askell

03:15-04:00 Adam Kalai The Consequences of High-Quality Human Simulators for Alignment

Coffee Break

04:15-05:00 Ellie Pavlick Unpacking the “Human” Part of “Human-Level AI”

05:05-05:50 David McAllester Interpretability, Control, and Retrieval Models

06:00-08:00 Dinner


Day 2: Fri, June 23

Security and Policy Implications / Session chair: Michael Webb

09:15-10:00 Nicholas Carlini How could HLAI be used to compromise software systems?

10:05-10:50 Helen Toner Policy approaches to managing risks & harms of AI (including HLAI)

Coffee Break

11:05-11:50 Ajeya Cotra Will HLAI take off “fast” or “slow”?

12:00-01:00 Lunch

01:00-02:25 Break

Imagining Future Systems / Session chair: Irina Rish

02:25-03:10 Jascha Sohl-Dickstein More intelligent agents behave less coherently: the "hot mess" theory of AI misalignment

03:15-04:00 Roger Grosse LLMs, Compression, and Generalization

Coffee Break

Catastrophic Risks / Session chair: Amanda Askell

04:15-05:00 Richard Ngo TBA

05:05-05:50 Dan Hendrycks An Overview of Catastrophic AI Risks (paper: An Overview of Catastrophic AI Risks)

06:00-08:00 Buffet Dinner


Day 3: Sat, June 24

Forecasting / Session chair: David Duvenaud

09:15-10:00 Tamay Besiroglu Projecting future computational resources, data, and algorithms for ML

10:05-10:50 Jacob Steinhardt What will GPT-2030 Look Like?

Coffee Break

11:05-11:50 Jonathan Mann Forecasting Existential Risk

12:00-01:00 Lunch

01:00-01:25 Break

Human-Compatibility / Session chair: Sham Kakade

01:25-02:10 Fernanda Viegas Exploring AI Dashboard Design: The System Model and the User Model

Coffee Break

02:25-03:10 Irina Rish On Scaling Laws, Emergent Behavior, and AI Democratization

03:15-04:00 Dean Foster Using the Principal / Agent problem from Game Theory to think about Alignment

04:10-04:40 Lightning Talks:

David Krueger Safe... and Trustworthy... and Responsible AI

Jacob Andreas High-Recall Interpretability

David Bau How can we avoid a decades-long research delay in interpretable ML?

Adam Kalai Intelligence Dimmer

Jacob Steinhardt The Future is (Very) Uncertain

Yejin Choi An Experimental Thought: Aligning AI Through Human Norms, Morals and Values

Stella Biderman Transformer Math

Jascha Sohl-Dickstein Adversarial examples influence both human and machine perception

06:00-08:00 Buffet Dinner