Preliminary Programme for Thursday February 26 @IASEAI (UNESCO building):
14:15 Introduction by the organizers
14:20 Brandon Amos, Ratip Emin Berker, Avinandan Bose, Edith Elkind, Sonja Kraiczy, Smitha Milli, Maximilian Nickel, Ariel Procaccia and Jamelle Watson-Daniels: Inference-Time Social Choice for Democratic Representation of Viewpoints in Large Language Models
15:00 Suvadip Sana, Martin T. Wells, Moon Duchin and Daniel Brous: Quantitative Relaxations of Arrow's Axioms
15:30 Sven Neth: Against Optimization
16:00 Coffee Break
16:15 Jobst Heitzig and Ram Potham: Model-Based Soft Maximization of Suitable Metrics of Long-Term Human Power
16:45 Benjamin Cookson, Nisarg Shah and Ziqi Yu: Unifying Proportional Fairness in Centroid and Non-Centroid Clustering
17:15 Alexandros Hollender and Sonja Kraiczy: Enforcing Axioms for AI Alignment under Loss-Based Rules
17:45 Wrap-up
18:00 End of workshop
Traditional or computational social choice that might be relevant in the AI context
Aspects of AI that might profit from some form of collective decision making
Concrete applications of social choice methods in AI
Examples of topics at the intersection of social choice theory and AI ethics/safety include, but are not limited to, the following:
Reinforcement Learning from Collective Human Feedback or multiple teachers
Social Choice Rules for higher-level steering of AI systems and Scalable Oversight
Democratic Inputs to AI, Constitutional AI, Iterated Amplification, Debate, Recursive Reward Modeling, etc.
Mechanism Design for Human-AI/AI-AI cooperation, Human-Compatible AI, corrigibility
Formal representation of values, ethical principles, legal concepts, and possibly incomplete or conflicting preferences
Standardization vs individualization, cooperation vs competition
Individual and collective decision theories under risk, uncertainty, and ambiguity
Scalable software tools for eliciting preferences and collective choice