Date: Monday, May 6

Location: Room R6, Ernest N. Morial Convention Center, New Orleans.

09:50 - Opening remarks: Adrià Garriga-Alonso (video)

10:00 - Invited talk: Cynthia Rudin, "Interpretability for important problems" (video)

10:30 - Coffee break + Posters

11:30 - Invited talk: Dylan Hadfield-Menell, "Formalizing the Value Alignment Problem in AI" (video)

12:00 - Contributed talk: David Krueger, "Misleading meta-objectives and hidden incentives for distributional shift" (video)

12:20 - Panel discussion: Cynthia Rudin, Catherine Olsson, Himabindu Lakkaraju, Dylan Hadfield-Menell, "Exploring overlaps and interactions between ML safety research areas" (moderator: Silvia Chiappa)

13:10 - Lunch break

14:30 - Break for ICLR invited talk

15:20 - Contributed talk: Beomsu Kim, "Bridging Adversarial Robustness and Gradient Interpretability" (video)

15:40 - Contributed talk: Avraham Ruderman, "Uncovering Surprising Behaviors in Reinforcement Learning via Worst-Case Analysis" (video)

16:00 - Coffee break + Posters

17:00 - Invited talk: Ian Goodfellow, "The case for dynamic defenses against adversarial examples" (video)

17:30 - Panel discussion: Ian Goodfellow, Rohin Shah, Ray Jiang, "Research priorities in ML safety" (moderator: Victoria Krakovna)

18:20 - Closing