Programme for Thursday, February 26 @IASEAI (UNESCO building)
14:15 Introduction by the organizers
14:20 Brandon Amos, Ratip Emin Berker, Avinandan Bose, Edith Elkind, Sonja Kraiczy, Smitha Milli, Maximilian Nickel, Ariel Procaccia and Jamelle Watson-Daniels: Inference-Time Social Choice for Democratic Representation of Viewpoints in Large Language Models (slides)
15:00 Suvadip Sana, Martin T. Wells, Moon Duchin and Daniel Brous: Quantitative Relaxations of Arrow's Axioms (slides)
15:30 Sven Neth: Against Optimization (hand-out, figure)
16:00 Coffee Break
16:15 Jobst Heitzig and Ram Potham: Model-Based Soft Maximization of Suitable Metrics of Long-Term Human Power (slides)
16:45 Benjamin Cookson, Nisarg Shah and Ziqi Yu: Unifying Proportional Fairness in Centroid and Non-Centroid Clustering (slides)
17:15 Alexandros Hollender and Sonja Kraiczy: Enforcing Axioms for AI Alignment under Loss-Based Rules (slides)
17:45 Wrap-up
18:00 End of workshop
9:30 Welcome and Introduction
9:40 Rachel Freedman: Short Introduction to AI Ethics and Safety (slides)
9:55 Marcus Pivato: Even Shorter Introduction to Social Choice (slides)
10:05 Vincent Conitzer: An SC4AI Research Agenda (slides)
10:25 Joe Edelman, Oliver Klingefjord and Ryan Lowe: How the Full-Stack Alignment approach relates to Social Choice (slides)
10:35 Yara Kyrychenko: C3AI: Crafting and Evaluating Constitutions for Constitutional AI
10:40 Open discussion: Concrete Problems in AI Calling for a Social Choice Approach (living document with examples here)
11:00 Coffee Break
11:45 Wesley Holliday: Aggregating Safety Preferences for Safeguarded AI Systems (slides)
12:05 Jobst Heitzig: Axiomatization of Human Power Metrics (slides)
12:15 Dominik Peters: Designing Benchmarks About Collective Decision Making (slides, experiment)
12:30 Levin Hornischer and Zoi Terzopoulou: Learning How to Vote with Principles: Axiomatic Insights Into the Collective Decisions of Neural Networks (slides)
12:45 Roberto-Rafael Maura-Rivero: LLMs + Nash Learning from Human Feedback are Maximal Lotteries (slides)
13:00 Wrap-up
13:15 End of workshop
Examples of topics at the intersection of social choice theory and AI ethics/safety include, but are not limited to, the following:
Traditional or computational social choice methods that might be relevant in the AI context
Aspects of AI that might benefit from some form of collective decision making
Concrete applications of social choice methods in AI
Reinforcement Learning from Collective Human Feedback or multiple teachers
Social Choice Rules for higher-level steering of AI systems and Scalable Oversight
Democratic Inputs to AI, Constitutional AI, Iterated Amplification, Debate, Recursive Reward Modeling, etc.
Mechanism Design for Human-AI/AI-AI cooperation, Human-Compatible AI, corrigibility
Formal representation of values, ethical principles, legal concepts, and possibly incomplete or conflicting preferences
Standardization vs individualization, cooperation vs competition
Individual and collective decision theories under risk, uncertainty, and ambiguity
Scalable software tools for eliciting preferences and collective choice