9AM GMT (6AM Rio de Janeiro, 9AM London, 10AM Paris, 12PM Istanbul, 2:30PM New Delhi, 6PM Tokyo/Seoul, 8PM Sydney, 10PM Auckland)
Kaname Miyagishima (Aoyama Gakuin University) "Cautious and fair social evaluation under risk"
Host: Marcus Pivato
Abstract. In this paper, we examine the social evaluation of resource distributions under risk, assuming preferences are ordinal and non-comparable. We investigate cautious evaluations that respect ex-post equity in two respects. First, we analyze the attitudes of social evaluation criteria towards risk and inequality, specifically identifying conditions under which society exhibits risk aversion, and conditions under which it exhibits higher degrees of risk and inequality aversion. Second, we characterize a class of social evaluation criteria using the cautious expected utility theory developed by Cerreia-Vioglio et al. (2015).
2PM GMT (9AM New York, 11AM Rio de Janeiro, 2PM Bristol, 3PM Paris, 5PM Istanbul, 7:30PM New Delhi, 11PM Tokyo/Seoul)
Richard Pettigrew (University of Bristol) "On Self-Undermining Decision Theories"
Host: Marcus Pivato
Abstract. A decision theory is self-undermining if, when you ask it which decision theory you should use, it does not consider itself to be among the rationally permissible ones. We distinguish different versions of this property and show that all of the proposed decision theories for imprecise credences are self-undermining in one sense, while the most discussed risk-sensitive decision theories are self-undermining in a slightly different sense. We argue that, in both cases, this gives some reason to reject these theories, though there are escape routes for their proponents.
(Joint work with Catrin Campbell-Moore and Jason Konek)
2PM GMT (9AM East Lansing, 11AM Rio de Janeiro, 2PM London, 3PM Paris, 5PM Istanbul, 7:30PM New Delhi, 11PM Tokyo/Seoul)
Jon X. Eguia (Michigan State University) "Efficiency in Collective Decision-Making via Quadratic Transfers"
Host: Marcus Pivato
Abstract. A group of agents with privately known preferences must choose one of several alternatives. We study the following collective-choice mechanism: every agent can express her intensity of support or opposition to each alternative, by transferring to the rest of the agents wealth equal to the square of the intensity expressed, and the outcome is determined by the net sums of the expressed intensities. We prove that as the group grows large, in every equilibrium of this quadratic-transfers mechanism (QTM), each agent’s transfer converges to zero, and the probability that the efficient outcome is chosen converges to one.
(Joint work with Nicole Immorlica, Steven P. Lalley, Katrina Ligett, E. Glen Weyl and Dimitrios Xefteris)
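As a hedged illustration of the mechanism described in the abstract, the accounting of the quadratic-transfers mechanism can be sketched as follows. The function name, the equal sharing of payments among the other agents, and the tie-breaking rule are simplifying assumptions for this sketch, not details taken from the paper: each agent pays the sum of squares of her expressed intensities, and the alternative with the highest net sum of intensities is chosen.

```python
# Illustrative sketch of the quadratic-transfers mechanism (QTM).
# The payment-sharing rule and tie-breaking are simplifying assumptions.

def qtm_outcome(reports):
    """reports: list of dicts, one per agent, mapping each alternative
    to a signed intensity (support > 0, opposition < 0)."""
    n = len(reports)
    # Each agent transfers the square of each expressed intensity,
    # shared equally among the other n - 1 agents.
    transfers = [sum(v * v for v in r.values()) for r in reports]
    received = [(sum(transfers) - transfers[i]) / (n - 1) for i in range(n)]
    net_payments = [transfers[i] - received[i] for i in range(n)]
    # The outcome maximizes the net sum of expressed intensities.
    totals = {a: sum(r[a] for r in reports) for a in reports[0]}
    winner = max(totals, key=totals.get)
    return winner, net_payments

winner, payments = qtm_outcome([
    {"A": 2.0, "B": -1.0},   # strongly supports A, mildly opposes B
    {"A": -0.5, "B": 0.5},
    {"A": 0.0, "B": 1.0},
])
# Net intensities: A = 1.5, B = 0.5, so A is chosen;
# the net payments sum to zero (transfers stay inside the group).
```

Because transfers stay inside the group, the mechanism is budget-balanced by construction; the paper's result is that in large groups equilibrium transfers vanish while the chosen alternative is efficient with probability approaching one.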
2PM GMT (8AM New Orleans, 11AM Rio de Janeiro, 2PM London, 3PM Paris, 5PM Istanbul, 7:30PM New Delhi, 11PM Tokyo/Seoul)
Nick Mattei (Tulane University) "Leveraging Data and Artificial Intelligence for Human Centered Computational Reasoning and Choice"
Host: Lirong Xia
Abstract. In recent years there has been an explosion of interest in topics that sit at the intersection of applications of computing technology and societal issues. There has been significant work in the academic, industrial, and policy spaces to clarify and formalize best practices regarding the deployment of computational decision making (e.g., artificial intelligence and machine learning) at scale. Part of this work has been a newfound interest in many age-old conversations about the roles and limits of technology and society. In this talk I’ll give an overview of recent work covering several projects that use tools from computational social choice, data science, and artificial intelligence more generally to build systems and algorithms that are context-aware, flexible, and human-centered. This work includes multi-stakeholder recommender systems that use social choice to balance competing fairness and efficiency objectives and online decision making systems that use (inverse) reinforcement learning and human cognitive models for making choices in complex environments.
(Password: %2E@&33!)
2PM GMT (9AM Toronto, 11AM Rio de Janeiro, 2PM London, 3PM Paris, 5PM Istanbul, 7:30PM New Delhi, 11PM Seoul/Tokyo)
Matthieu Hervouin (Université Paris Dauphine) "Anonymity and Neutrality in Classification Aggregation"
Host: Jobst Heitzig
Abstract. Based on previous results in Preference Aggregation, we explore the possibility of defining anonymous and neutral aggregators in Classification Aggregation. We find a necessary condition on the number of individuals, objects and categories for the existence of anonymous and neutral aggregators and propose such aggregators. We prove that this condition is tight for an equal number of individuals and objects. Experimental evidence suggests this tightness extends to cases with different numbers of individuals and objects, though this remains a conjecture requiring formal proof.
2PM GMT (10AM Toronto/Montreal, 11AM Rio de Janeiro, 2PM London, 3PM Aarhus, 5PM Istanbul, 7:30PM New Delhi, 11PM Seoul/Tokyo)
Ioannis Caragiannis (Aarhus University) "Quantile Agent Utility and Implications to Randomized Social Choice"
Host: Marcus Pivato
Abstract. We initiate a novel direction in randomized social choice by proposing a new definition of agent utility for randomized outcomes. Each agent has a preference over all outcomes and a quantile parameter. Given a lottery over the outcomes, an agent gets utility from a particular representative, defined as the least preferred outcome that can be realized so that the probability that any worse-ranked outcome can be realized is at most the agent’s quantile value. In contrast to other utility models that have been considered in randomized social choice (e.g., stochastic dominance, expected utility), our quantile agent utility compares two lotteries for an agent by just comparing the representatives, as is done for deterministic outcomes. We revisit questions in randomized social choice using the new utility definition. We study the compatibility of efficiency and strategyproofness for randomized voting rules, efficiency and fairness for randomized one-sided matching mechanisms, and efficiency, stability, and strategyproofness for lotteries over two-sided matchings. In contrast to well-known impossibilities in randomized social choice, we show that satisfying the above properties simultaneously can be possible.
(Joint work with Sanjukta Roy, Indian Statistical Institute)
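A minimal sketch of the representative under one plausible reading of the definition above (the function name and the exact inequality convention are my assumptions, not the authors'): the representative is the most preferred realizable outcome whose strictly worse-ranked outcomes carry total probability at most the quantile q, so q = 0 yields the worst realizable outcome and q = 1 the best.

```python
def representative(ranking, lottery, q):
    """ranking: outcomes listed from most to least preferred.
    lottery: dict mapping outcome -> probability.
    q: the agent's quantile parameter in [0, 1].
    Returns the most preferred realizable outcome whose strictly
    worse-ranked outcomes have total probability at most q
    (one plausible formalization; q = 0 is maximally cautious)."""
    for i, o in enumerate(ranking):
        if lottery.get(o, 0.0) == 0.0:
            continue  # outcome cannot be realized under this lottery
        worse_mass = sum(lottery.get(w, 0.0) for w in ranking[i + 1:])
        if worse_mass <= q:
            return o
    return None

rank = ["a", "b", "c"]           # a preferred to b preferred to c
p = {"a": 0.2, "b": 0.3, "c": 0.5}
representative(rank, p, 0.0)     # -> "c": a fully cautious agent
representative(rank, p, 0.9)     # -> "a": mass below "a" is 0.8 <= 0.9
```

As the abstract notes, two lotteries are then compared for an agent simply by comparing their representatives under the agent's ranking, exactly as deterministic outcomes are compared.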
9AM GMT (5AM Toronto/Montreal, 6AM Rio de Janeiro, 10AM London, 11AM Paris, 12PM Istanbul, 2:30PM Kolkata, 6PM Tokyo/Seoul, 8PM Sydney, 10PM Auckland)
Souvik Roy (Indian Statistical Institute) "Aggregation of Choice Functions on Non-Rational Domains"
Host: Marcus Pivato
Abstract. We consider the problem of aggregating choice functions satisfying the Pareto axiom and Independence of Irrelevant Alternatives (IIA) on a class of domains. We first show that when agents have rational choice functions on all the binary sets (that is, all subsets of cardinality two), a choice aggregator satisfies the Pareto axiom and IIA if and only if it is a dictatorial choice aggregator. Next, we consider two domains of non-rational choice functions: limitedly rational domains and partially rational domains, and provide the structure of choice aggregators satisfying the Pareto axiom and IIA.
9AM GMT (5AM Toronto/Montreal, 6AM Rio de Janeiro, 10AM London, 11AM Paris, 12PM Istanbul, 2:30PM New Delhi, 6PM Tokyo/Seoul, 7PM Sydney, 9PM Auckland)
Susumu Cato (University of Tokyo) "Population ethics with thresholds"
Host: Marcus Pivato
Abstract. We propose a new class of social quasi-orderings in a variable-population setting. In order to declare one utility distribution at least as good as another, the critical-level utilitarian value of the former must reach or surpass the value of the latter. For each possible absolute value of the difference between the population sizes of two distributions to be compared, we specify a non-negative threshold level and a threshold inequality. This inequality indicates whether the corresponding threshold level must be reached or surpassed in the requisite comparison. All of these threshold critical-level utilitarian quasi-orderings perform same-number comparisons by means of the utilitarian criterion. In addition to this entire class of quasi-orderings, we axiomatize two important subclasses. The members of the first subclass are associated with proportional threshold functions, and the well-known critical-band utilitarian quasi-orderings are included in this subclass. The quasi-orderings in the second subclass employ constant threshold functions; the members of this second subclass have, to the best of our knowledge, not been examined so far. Furthermore, we characterize the members of our class that (i) avoid the repugnant conclusion; (ii) avoid the sadistic conclusions; and (iii) respect the mere-addition principle.
(Joint work with Walter Bossert and Kohei Kamaga)
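The comparison described in the abstract can be sketched as follows. This is a hedged illustration: the function names, the specific critical level, and the example threshold functions are assumptions for the sketch, not values from the paper. A distribution u is declared at least as good as v when its critical-level utilitarian value reaches (or, under a strict threshold inequality, surpasses) the value of v plus a threshold depending on the absolute population-size difference.

```python
def clu_value(u, c):
    """Critical-level utilitarian value of a utility distribution u
    with critical level c."""
    return sum(x - c for x in u)

def at_least_as_good(u, v, c, threshold, strict=False):
    """Threshold critical-level utilitarian comparison (sketch).
    threshold: maps the absolute population-size difference to a
    non-negative level; `strict` selects the threshold inequality.
    Since the relation is a quasi-ordering, u and v may be
    incomparable (False in both directions)."""
    gap = clu_value(u, c) - clu_value(v, c)
    t = threshold(abs(len(u) - len(v)))
    return gap > t if strict else gap >= t

prop = lambda d: 0.5 * d          # proportional threshold (first subclass)
const = lambda d: 1.0 if d > 0 else 0.0  # constant threshold (second subclass)

u, v = [3.0, 3.0, 3.0], [4.0, 4.0]
# Equal critical-level values (6 each, with c = 1) but a population gap,
# so neither distribution is ranked above the other: they are incomparable.
at_least_as_good(u, v, c=1.0, threshold=prop)   # False
at_least_as_good(v, u, c=1.0, threshold=prop)   # False
```

Note that the threshold at a size difference of zero is zero, so same-number comparisons collapse to the ordinary utilitarian criterion, as the abstract states.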
8PM GMT (2PM Mexico City, 4PM Toronto/Montréal, 5PM Rio de Janeiro, 9PM London, 10PM Munich, 11PM Istanbul, 6AM Wednesday in Sydney, 8AM Wednesday in Auckland)
Christian List (Munich Center for Mathematical Philosophy, Ludwig-Maximilians-Universität München) "The impossibility of non-manipulable probability aggregation"
Host: Marcus Pivato
Abstract. A probability aggregation rule assigns to each profile of probability functions across a group of individuals (representing their individual probability assignments to some propositions) a collective probability function (representing the group's probability assignment). The rule is “non-manipulable” if no group member can manipulate the collective probability for any proposition in the direction of his or her own probability by misrepresenting his or her probability function (“strategic voting”). We show that, except in trivial cases, no probability aggregation rule satisfying two very mild conditions (non-dictatorship and consensus preservation) is non-manipulable.
(Joint work with Franz Dietrich)
2PM GMT (10AM Toronto/Montréal, 11AM San Luis, 3PM London, 4PM Paris, 5PM Istanbul, 7:30PM New Delhi, 11PM Tokyo/Seoul)
Pablo Arribillaga (Universidad Nacional de San Luis) "Obvious Strategy-proofness with Respect to a Partition"
Host: Danilo Coelho
Abstract. We define and study obvious strategy-proofness with respect to a partition of the set of agents. It encompasses strategy-proofness as a special case when the partition is the coarsest one and obvious strategy-proofness when the partition is the finest. For any partition, it falls between these two extremes. We establish two general properties of this new notion and apply it to the simple voting problem with two alternatives and strict preferences.
(Joint work with Jordi Massó and Alejandro Neme)
Video recording (TBA)
9PM GMT (3PM Mexico City, 5PM Toronto/Montréal, 6PM Rio de Janeiro, 10PM London, 11PM Paris, 12AM Istanbul, 6AM Wednesday in Tokyo/Seoul, 7AM Wednesday in Sydney, 9AM Wednesday in Auckland)
Kensei Nakamura (Hitotsubashi University) "When is it (im)possible to respect all individuals' preferences under uncertainty?"
Host: Marcus Pivato
Abstract. When aggregating Subjective Expected Utility preferences, an impossibility result is derived from the Pareto principle unless the individuals have a common belief. This paper examines the source of this impossibility in more detail by considering the aggregation of a general class of incomplete preferences that can represent gradual ambiguity perceptions. Our result shows that the planner cannot avoid ignoring some individuals unless there is a probability distribution that all individuals unanimously think to be most plausible. That is, even if the individuals have similar ambiguity perceptions, the impossibility holds as long as some individual's most plausible belief is slightly different from others.
9AM GMT (5AM Toronto/Montréal, 6AM Rio de Janeiro, 10AM London, 11AM Paris, 12PM Istanbul, 2:30PM New Delhi, 6PM Tokyo/Seoul, 7PM Sydney, 9PM Auckland)
Noriaki Kiguchi (Institute of Economic Research, Kyoto University) "Collective State Spaces"
Host: Marcus Pivato
Abstract. This paper aggregates individuals’ preferences over menus of lotteries into social preferences. Specifically, we consider the case where both individuals and society have preferences for flexibility (Dekel et al., 2001). That is, they face uncertainty regarding future states that determine their tastes over lotteries, leading them to prefer larger menus at the current time. Each individual’s future tastes are influenced by different aspects of future states, implying that each has their own state space. This paper axiomatically characterizes how society should construct a collective state space, which represents the entire set of factors influencing society’s future tastes. All of our axioms are motivated by discussions on the Pareto principle.
9PM GMT (2PM Vancouver, 5PM Boston, 6PM Rio de Janeiro, 10PM London, 11PM Paris, 12AM Istanbul, 6AM Wednesday in Seoul, 9AM Wednesday in Auckland)
Florian Mudekereza (Massachusetts Institute of Technology) "Robust Aggregation of Preferences"
Host: Marcus Pivato
Abstract. This paper analyzes a society composed of individuals who have diverse sets of beliefs (or models) and diverse tastes (or utility functions). It characterizes the model selection process of a social planner who wishes to aggregate individuals' beliefs and tastes but is concerned that their beliefs are misspecified (or distorted). A novel impossibility result emerges: a utilitarian social planner who seeks robustness to misspecification never aggregates individuals' beliefs but instead behaves systematically as a dictator by selecting a single individual's belief. This tension between robustness and aggregation exists because aggregation yields policy-contingent beliefs, which are very sensitive to policy outcomes. Restoring the possibility of belief aggregation requires individuals to have heterogeneous tastes and some common beliefs. This analysis reveals that misspecification has significant economic implications for welfare aggregation. These implications are illustrated in treatment choice, asset pricing, and dynamic macroeconomics.
9PM GMT (2PM San Francisco, 5PM Toronto/Montréal, 6PM Rio de Janeiro, 10PM London, 11PM Paris, 12AM Istanbul, 6AM Wednesday in Seoul, 9AM Wednesday in Auckland)
Daniel Halpern (Google Research) "A Social Choice Perspective on AI Alignment"
Host: Marcus Pivato
Abstract. Consider the problem of aligning large language models (LLMs) with human values. The standard approach begins with pairwise comparisons from users of the form "between these two outputs to the prompt, which do you prefer?" This response data is aggregated into a reward function, giving numerical scores to outputs, which is subsequently used to steer an existing LLM toward higher-reward answers. This process is essential for making LLMs helpful while avoiding dangerous or biased responses.
However, this paradigm faces a fundamental challenge: people often disagree on what constitutes a "better" output. What, then, should we do when faced with diverse and conflicting preferences?
This talk explores two approaches to this challenge rooted in social choice theory. First, we take an axiomatic perspective, arguing that the process of learning reward functions should satisfy minimal requirements such as Pareto Optimality: if all users unanimously prefer one outcome to another, the aggregated reward function should reflect this. We show that current alignment methods necessarily violate these basic axioms. In contrast, we provide a proof-of-concept aggregation rule that is guaranteed to satisfy them. Second, we explore a more radical approach: representing, rather than resolving, disagreement. Instead of training a single LLM, we train an ensemble, analogous to multi-winner voting systems. We introduce a novel criterion, pairwise calibration, inspired by proportionality. Together, these approaches provide a principled foundation for building AI systems aligned with the pluralism of human values.
(Joint work with Luise Ge, Evi Micha, Ariel D. Procaccia, Itai Shapira, Yevgeniy Vorobeychik, and Junlin Wu.)
5PM GMT (10AM San Francisco, 1PM Toronto/Montréal, 2PM Rio de Janeiro, 6PM London, 7PM Berlin, 8PM Istanbul, 10:30PM New Delhi)
Jobst Heitzig (Potsdam Institute for Climate Impact Research) "Model-Based Soft Maximization of Suitable Metrics of Long-Term Human Power"
Host: Marcus Pivato
Abstract. Power is a key concept in AI safety: power-seeking as an instrumental goal, sudden or gradual disempowerment of humans, power balance in human-AI interaction and international AI governance. At the same time, power as the ability to pursue diverse goals is essential for wellbeing.
This paper explores the idea of promoting both safety and wellbeing by forcing AI agents explicitly to empower humans and to manage the power balance between humans and AI agents in a desirable way. Using a principled, partially axiomatic approach, we design a parameterizable and decomposable objective function that represents an inequality- and risk-averse long-term aggregate of human power. It takes into account humans' bounded rationality and social norms, and, crucially, considers a wide variety of possible human goals.
We derive algorithms for computing that metric by backward induction or approximating it via a form of multi-agent reinforcement learning from a given world model. We exemplify the consequences of (softly) maximizing this metric in a variety of paradigmatic situations and describe what instrumental sub-goals it will likely imply. Our cautious assessment is that softly maximizing suitable aggregate metrics of human power might constitute a beneficial objective for agentic AI systems that is safer than direct utility-based objectives.
(Joint work with Ram Potham)