Preliminary Programme for Thursday, February 26 @IASEAI (UNESCO building):
14:15 Introduction by the organizers
14:20 Brandon Amos, Ratip Emin Berker, Avinandan Bose, Edith Elkind, Sonja Kraiczy, Smitha Milli, Maximilian Nickel, Ariel Procaccia and Jamelle Watson-Daniels: Inference-Time Social Choice for Democratic Representation of Viewpoints in Large Language Models
15:00 Suvadip Sana, Martin T. Wells, Moon Duchin and Daniel Brous: Quantitative Relaxations of Arrow's Axioms
15:30 Sven Neth: Against Optimization
16:00 Coffee Break
16:15 Jobst Heitzig and Ram Potham: Model-Based Soft Maximization of Suitable Metrics of Long-Term Human Power
16:45 Benjamin Cookson, Nisarg Shah and Ziqi Yu: Unifying Proportional Fairness in Centroid and Non-Centroid Clustering
17:15 Alexandros Hollender and Sonja Kraiczy: Enforcing Axioms for AI Alignment under Loss-Based Rules
17:45 Wrap-up
18:00 End of workshop
Preliminary Programme for Friday, February 27:
9:30 Welcome and Introduction
9:40 Rachel Freedman: Short Introduction to AI Ethics and Safety
9:55 Marcus Pivato: Even Shorter Introduction to Social Choice
10:05 Vincent Conitzer: An SC4AI Research Agenda
10:25 Joe Edelman, Oliver Klingefjord and Ryan Lowe: How the Full-Stack Alignment approach relates to Social Choice
10:35 Yara Kyrychenko: C3AI: Crafting and Evaluating Constitutions for Constitutional AI
10:40 Open discussion: Concrete Problems in AI Calling for a Social Choice Approach
11:00 Coffee Break, potentially with Posters
11:30 Wesley Holliday and Sonja Kraiczy: Aggregating Safety Preferences for Safeguarded AI Systems
11:50 Jobst Heitzig: Axiomatization and Learning of Human Power Metrics
12:00 Dominik Peters: Designing Benchmarks About Collective Decision Making
12:10 Adam Lesnikowski: short input (TBA)
12:15 Levin Hornischer and Zoi Terzopoulou: Learning How to Vote with Principles: Axiomatic Insights Into the Collective Decisions of Neural Networks
12:30 Roberto Rafael Maura Rivero: LLMs + Nash Learning from Human Feedback are Maximal Lotteries
12:45 Wrap-up
13:00 End of workshop
Examples of topics at the intersection of social choice theory and AI ethics/safety include, but are not limited to, the following:
Traditional or computational social choice that might be relevant in the AI context
Aspects of AI that might profit from some form of collective decision making
Concrete applications of social choice methods in AI
Reinforcement Learning from Collective Human Feedback or from multiple teachers
Social Choice Rules for higher-level steering of AI systems and Scalable Oversight
Democratic Inputs to AI, Constitutional AI, Iterated Amplification, Debate, Recursive Reward Modeling, etc.
Mechanism Design for Human-AI/AI-AI cooperation, Human-Compatible AI, corrigibility
Formal representation of values, ethical principles, legal concepts, and possibly incomplete or conflicting preferences
Standardization vs individualization, cooperation vs competition
Individual and collective decision theories under risk, uncertainty, and ambiguity
Scalable software tools for eliciting preferences and collective choice