Schedule
Saturday, Dec 14, 2024 (Location: Eastern Meeting Rooms 19 & 20)
8:45-9:00am: Welcome
9:00-9:30am: Invited talk: Andreea Bobu (MIT)
Learning a Lot from a Little: How Structure Enables Efficient and Human-Aligned Robot Learning
9:30-10:00am: Invited talk: Michael Bernstein (Stanford)
Interactive Simulacra of Human Attitudes and Behavior
10:00-10:30am: Spotlight talks
PAL: Pluralistic Alignment Framework for Learning from Heterogeneous Preferences (Daiwei Chen, Yi Chen, Aniket Rege, Ramya Korlakai Vinayak)
The Double-Edged Sword of Behavioral Responses in Strategic Classification (Raman Ebrahimi, Kristen Vaccaro, Parinaz Naghizadeh)
Superficial Alignment, Subtle Divergence, and Nudge Sensitivity in LLM Decision-Making (Manuel Cherep, Nikhil Singh, Patricia Maes)
Words that work: Using language to generate hypotheses (Rafael M. Batista, James Ross)
Learning to Cooperate with Humans using Generative Agents (Yancheng Liang, Daphne Chen, Abhishek Gupta, Simon Shaolei Du, Natasha Jaques)
Meaning Through Motion: DUET – A Multimodal Dataset for Kinesics Analysis in Dyadic Activities (Cheyu Lin, Katherine A. Flanigan, Sirajum Munir)
10:30-10:45am: Coffee break
10:45-11:45am: Panel (Michael Bernstein, Andreea Bobu, Tom Griffiths, Hoda Heidari, Hannah Rose Kirk, Jon Kleinberg, Sendhil Mullainathan, moderated by Katie Collins)
11:45am-1:00pm: Lunch break
1:00-1:30pm: Invited talk: Tom Griffiths (Princeton)
Combining theory and data to predict and explain human decisions
1:30-2:00pm: Invited talk: Sendhil Mullainathan (MIT)
Misunderstandings
2:00-3:30pm: Poster session
3:30-3:45pm: Coffee break
3:45-4:15pm: Invited talk: Hoda Heidari (CMU)
Examining large language models as qualitative research participants
4:15-4:45pm: Invited talk: Hannah Rose Kirk (Oxford)
Putting the H Back in RLHF: Challenging assumptions of human behaviour for AI alignment
4:45-5:00pm: Concluding remarks