Aggregation across disciplines: connections and frameworks

Presentation titles, abstracts, and slides

Jiehua Chen Computational aspects of multiwinner approval voting via p-norm Hamming distance vectors

We consider a family of multiwinner approval voting rules, which generalize the classical minisum and minimax procedures. Specifically, given a rational number p and approval ballots, the p-norm Hamming rule chooses a subset of co-winners which minimizes the p-norm of the vector of Hamming distances (i.e., the sizes of the symmetric differences) to the ballots. The minisum and minimax procedures are hence special cases, corresponding to p = 1 and p = ∞, respectively. It is well-known that determining a winner set under the minisum procedure (p = 1) can be done in polynomial time, while it becomes NP-hard under the minimax procedure (p = ∞). In this work, we show that winner determination remains NP-hard for every fixed rational p > 1, closing the gap for all rational values of p between 1 and infinity. We also provide an almost tight exponential-time algorithm and a simple factor-2 approximation algorithm for all fixed p > 1.
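
As a concrete illustration, here is a brute-force sketch of winner determination under the p-norm Hamming rule (a minimal sketch; the function name is mine, and I assume a fixed committee size k, which may differ from the talk's exact setting):

```python
from itertools import combinations

def p_norm_hamming_winners(ballots, candidates, k, p):
    """Brute-force winner set under the p-norm Hamming rule.

    ballots: list of sets of approved candidates; k: committee size;
    p: norm parameter (use float('inf') for minimax).  Exponential in
    len(candidates), in line with the NP-hardness results of the talk.
    """
    def cost(committee):
        dists = [len(committee ^ b) for b in ballots]  # symmetric differences
        if p == float('inf'):
            return max(dists)
        return sum(d ** p for d in dists) ** (1 / p)
    return min((set(c) for c in combinations(candidates, k)), key=cost)

# Example: three ballots over four candidates, committee size 2, p = 2.
print(p_norm_hamming_winners([{0, 1}, {0, 2}, {1, 3}], [0, 1, 2, 3], 2, 2))
```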

Joint work with Danny Hermelin and Manuel Sorge.

[Presentation slides]



Eric Danan Partial utilitarianism

Mongin (1994) proved a multi-profile version of Harsanyi's (1955) Aggregation Theorem: within the expected utility model, a social welfare functional mapping profiles of individual utility functions into social preference relations satisfies the Pareto and Independence of Irrelevant Alternatives principles if and only if it is utilitarian. The present paper extends Mongin's analysis by allowing individuals to have incomplete preferences, represented by sets of utility functions. An impossibility theorem is first established: social preferences cannot satisfy all the expected utility axioms, precluding utilitarian aggregation in this extended setting. Adapting the objective vs. subjective rationality approach of Gilboa et al. (2010) to the present social choice setting, representation theorems are then obtained by relaxing either the Completeness or the Independence axiom at the social level, yielding two forms of partial utilitarianism.

[Presentation slides]



Franz Dietrich (with Kai Spiekermann) Deliberation and the wisdom of crowds

Under the epistemic interpretation of voting, votes express judgments about what is correct, and the aggregation rule aims to generate correct outcomes, using the individual judgments as informational input. Is group deliberation epistemically beneficial, i.e., does it tend to change individual judgments in ways that improve the aggregate outcome? To tackle this notorious question, we construe deliberation as information sharing. That is, each voter bases their judgment on some personal set of evidence items, and during deliberation any voter shares some or all of their evidence with others, so that the evidence sets of voters grow and become more similar. We present a formal model that captures deliberation as information sharing, and use this model to highlight and simulate three voting failures and their potential reduction through deliberation. The first failure is the overrepresentation of widespread evidence: evidence items held by many voters are overrepresented in voting outcomes as compared to evidence items held by few or just one voter. The second failure is the neglect of evidence strength: because of 'one man, one vote', each voter has the same impact regardless of the strength of their evidence. The third failure is the neglect of informational complementarities across voters: knowledge that would follow from combining the evidence of different voters fails to enter voting outcomes, as it does not enter any voter's vote. We finally address the relationship to jury theorems.
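
The following Monte Carlo sketch illustrates deliberation as evidence sharing; the binary-truth signal model, the sharing probability, and all parameters are my own illustrative assumptions rather than the authors' model:

```python
import random

def deliberation_experiment(n_voters=11, n_items=20, share_prob=0.5,
                            trials=2000):
    """Compare majority accuracy before and after evidence sharing.

    Truth is +1; each evidence item is an independent signal matching the
    truth with probability 0.7; each voter initially holds a random subset
    of items and votes with the majority of their evidence (ties vote +1);
    during deliberation every held item reaches each other voter with
    probability share_prob, so evidence sets grow and become more similar."""
    def vote(holding, signals):
        return 1 if sum(signals[j] for j in holding) >= 0 else -1
    def majority(votes):
        return 1 if sum(votes) > 0 else -1
    before = after = 0
    for _ in range(trials):
        signals = [1 if random.random() < 0.7 else -1 for _ in range(n_items)]
        holdings = [{j for j in range(n_items) if random.random() < 0.3}
                    for _ in range(n_voters)]
        before += majority([vote(h, signals) for h in holdings]) == 1
        pool = set().union(*holdings)
        shared = [h | {j for j in pool if random.random() < share_prob}
                  for h in holdings]
        after += majority([vote(h, signals) for h in shared]) == 1
    return before / trials, after / trials
```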

Joint work in progress with Kai Spiekermann.

[Presentation slides]



Hayrullah Dindar Compromise in combinatorial domains

We consider collective choice problems where the set of social outcomes is a Cartesian product of finitely many finite sets. Each individual is assigned a two-level preference, defined as a pair involving a vector of strict rankings of elements in each of the sets and a strict ranking of social outcomes. A voting rule is called (resp. weakly) product stable at some two-level preference profile if every (resp. at least one) outcome formed by separate coordinate-wise choices is also an outcome of the rule applied to preferences over social outcomes. We investigate (weak) product stability for the specific class of compromise solutions involving q-approval rules, where q lies between 1 and the number I of voters. Given a finite set X and a profile of I linear orders over X, a q-approval rule selects the elements of X that gather the largest support above q at the highest rank in the profile. Well-known q-approval rules are the Fallback Bargaining solution (q = I) and the Majoritarian Compromise (q = ⌈I/2⌉). We assume that coordinate-wise rankings and rankings of social outcomes are related in a neutral way, and we investigate the existence of neutral two-level preference domains that ensure the weak product stability of q-approval rules. We show that no such domain exists unless either q = I or very special cases prevail. Moreover, we characterize the neutral two-level preference domains over which the Fallback Bargaining solution is weakly product stable.
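
Under my reading of this definition, a q-approval rule descends rank by rank until some alternative is ranked within the top k positions by at least q voters; a minimal sketch:

```python
def q_approval(profile, q):
    """profile: list of rankings (each a list of alternatives, best first).

    At the first depth k where some alternative is ranked within the top
    k positions by at least q voters, return the alternatives with the
    largest such support."""
    alternatives = profile[0]
    for k in range(1, len(alternatives) + 1):
        support = {x: sum(x in r[:k] for r in profile) for x in alternatives}
        qualified = {x: s for x, s in support.items() if s >= q}
        if qualified:
            best = max(qualified.values())
            return [x for x, s in qualified.items() if s == best]

# q = I gives the Fallback Bargaining solution; q = ceil(I/2) gives the
# Majoritarian Compromise.
```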

Joint work with Jean Lainé.

[Presentation slides]



Edith Elkind United for change: deliberative coalition formation to change the status quo.

We study a setting in which a community wishes to identify a strongly supported proposal from a space of alternatives, in order to change the status quo. We describe a deliberation process in which agents dynamically form coalitions around proposals that they prefer over the status quo. We formulate conditions on the space of proposals and on the ways in which coalitions are formed that guarantee that deliberation succeeds, that is, terminates by identifying a proposal with the largest possible support. Our results provide theoretical foundations for the analysis of deliberative processes, in particular in systems for democratic deliberation support, such as LiquidFeedback or Polis.

Based on joint work with Davide Grossi, Nimrod Talmon, Udi Shapiro, Abheek Ghosh and Paul Goldberg (AAAI'21/AAAI'22).

[Presentation slides]



Piotr Faliszewski Map of Elections: The Story So Far

Computational social choice started off by focusing on theoretical studies, but more and more researchers now include experimental analyses or pursue purely experimental research. In this talk we will study the problem of selecting data for such experiments regarding elections and voting. Specifically, we will present the idea of a "map of elections". The idea of this map is to gather as diverse a set of elections as possible and arrange them visually in some meaningful way. To this end, we define a distance between elections, generate elections from a number of different distributions, compute the distances between these elections, and seek an embedding of the elections as points on a plane. We would like the distances between the points on the plane to be as similar as possible to those between the corresponding elections. We will show that while some very natural distances between elections are NP-hard to compute, some simpler ones seem to lead to sufficiently informative maps. We will also show that our embeddings of elections on the plane are of fairly high quality, and we will discuss some theoretical properties of the map.
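
A toy version of the pipeline (the distance below is a simplified stand-in that ignores the matching of candidates used by the real distances, which is exactly what makes some of them hard to compute; any stress-minimizing embedding, here scikit-learn's MDS, can play the role of the plane embedding):

```python
import numpy as np
from sklearn.manifold import MDS

def positionwise_matrix(election):
    """election: list of rankings over candidates 0..m-1; returns the
    m x m matrix whose entry [c, r] is the fraction of voters ranking
    candidate c at position r."""
    m = len(election[0])
    M = np.zeros((m, m))
    for ranking in election:
        for r, c in enumerate(ranking):
            M[c, r] += 1
    return M / len(election)

def election_distance(e1, e2):
    # Simplified: compare positionwise matrices entrywise, with no
    # optimal matching of candidates.
    return np.abs(positionwise_matrix(e1) - positionwise_matrix(e2)).sum()

def map_of_elections(elections):
    D = np.array([[election_distance(a, b) for b in elections]
                  for a in elections])
    return MDS(n_components=2, dissimilarity='precomputed').fit_transform(D)
```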

Joint work with: Niclas Boehmer, Robert Bredereck, Edith Elkind, Rolf Niedermeier, Arkadii Slinko, Krzysztof Sornat, Stanislaw Szufa, Nimrod Talmon, and Tomasz Was.

[Presentation slides]



Marc Fleurbaey The currency of aggregation

The "currency of justice" is the expression often used to talk about interpersonal comparisons and the definition of the relevant inequalities that should be eliminated as much as possible. In collective choice issues there is a similar issue of deciding what should be aggregated. In particular, under risk and incomplete information, there are debates about whether individuals' ex ante prospects are the right object of aggregation, taking their beliefs at the time of decision as given, or whether only a subset of these beliefs should be considered, or even not their beliefs but their underlying information and sources. I explore the guidance one can get on this issue from seeking maximal rationality from collective choice. And will also examine the idea that there are, possibly, different contexts of aggregation that may call for different ethical principles.

[Presentation slides]



Michel Grabisch Aggregation based on Choquet integral and capacities

Capacities are nonadditive measures and can be used to define nonlinear functionals, such as the Choquet integral and the multilinear extension of Owen. Over a finite set of attributes N (or criteria, voters, etc.), capacities are able to represent the importance of a group of attributes. The Choquet integral of a function over N (e.g., representing values of attributes, scores on criteria, etc.) with respect to a capacity computes an aggregation of the values of the function taking into account the various importances of subsets of attributes. It encompasses the classical weighted arithmetic mean and the ordered weighted arithmetic mean, the latter including min, max, the median and all order statistics. A parent integral is the multilinear extension of Owen. Both can be seen as interpolation methods over the hypercube. Capacities can be analyzed through the interaction transform, which has an immediate interpretation in terms of preferential dependence among the attributes. This sheds light on the interpretation of these nonlinear integrals and permits practical identification and learning in applications.
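
The discrete Choquet integral is compact to state in code; this sketch uses the standard telescoping formula over attributes sorted by score (names are mine):

```python
def choquet(values, capacity):
    """Discrete Choquet integral of nonnegative scores w.r.t. a capacity.

    values: dict attribute -> score; capacity: function from a frozenset
    of attributes to [0, 1], with capacity(empty set) = 0, capacity(N) = 1,
    and monotone w.r.t. set inclusion."""
    result, prev = 0.0, 0.0
    remaining = set(values)  # attributes whose score is >= current level
    for attr, score in sorted(values.items(), key=lambda kv: kv[1]):
        result += (score - prev) * capacity(frozenset(remaining))
        prev = score
        remaining.remove(attr)
    return result

# With the additive capacity mu(S) = |S|/3 this reduces to the arithmetic
# mean: choquet({'a': 5, 'b': 1, 'c': 3}, lambda S: len(S) / 3) == 3.0
```

Plugging in a capacity that is 0 on every proper subset of N recovers the min, while symmetric capacities (depending only on |S|) yield the ordered weighted means mentioned above.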

[Presentation slides]



Jean Lainé Vote Swapping in multiwinner two-tier elections

We investigate a specific type of group manipulation in two-tier elections, where groups of voters manipulate by exchanging votes. Two-tier elections are modelled as a two-stage choice procedure where in the first stage voters are distributed into districts, each being assigned one delegate. District preferences result from aggregating voters' preferences district-wise by means of some aggregation rule. Final outcomes are subsets of alternatives obtained at the second stage by applying a social choice function to the district preference profile. Combining an aggregation rule and a social choice function defines a constitution. Voters' preferences are linear orders over alternatives, which are extended to partial orders over sets by means of either the Kelly, the Fishburn, or the Gärdenfors extension. A constitution is Kelly (resp. Fishburn, Gärdenfors) swap-proof if no group of voters can obtain a jointly preferred outcome, according to the Kelly (resp. Fishburn, Gärdenfors) extension, by exchanging their preferences. We establish sufficient conditions for swap-proofness w.r.t. the three extensions. Special attention is paid to Condorcet (resp. positional) constitutions, where both the aggregation rule and the social choice function are based on simple majority voting (resp. a score vector). We characterize Kelly, Fishburn and Gärdenfors swap-proof Condorcet constitutions, and show that no positional constitution is Kelly (hence Fishburn and Gärdenfors) swap-proof.
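
Schematically, a constitution composes the two stages as follows (a minimal sketch; names are mine):

```python
def constitution(districts, aggregation_rule, social_choice_function):
    """districts: list of district preference profiles.  Stage 1 turns
    each district profile into a single district preference; stage 2
    applies a social choice function to the profile of district
    preferences, yielding a set of alternatives."""
    district_preferences = [aggregation_rule(profile) for profile in districts]
    return social_choice_function(district_preferences)
```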

Joint work with Hayrullah Dindar.

[Presentation slides]



Jérôme Lang Serial Dictatorship can be Fair

When allocating indivisible items to agents, under very mild conditions, the only strategyproof rule is serial dictatorship: given a fixed order over agents, at each step the designated agent chooses a number of items from those that are still available. This rule is often considered bad, because agents who come earlier in the sequence have a larger choice of items. However, when there are significantly more items than agents, the advantage of agents who come earlier can be compensated by a higher number of items received by those who come later. How to balance priority in the sequence against the number of items received is a nontrivial question. We use a model parameterized by a mapping from ranks to scores, a notion of social welfare, and a distribution over preference profiles; for several meaningful choices of parameters, we show that the optimal sequence can be computed in polynomial time. We show that scoring vectors can be elicited by a simple procedure, and report on an experiment.
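
An illustrative brute-force search over sequences, here encoded as a vector saying how many items each agent (in a fixed order) picks; i.i.d. uniform utilities and egalitarian welfare are my stand-ins for the paper's rank-score and distribution parameters:

```python
import random
from itertools import product

def run_dictatorship(counts, utils):
    """Serial dictatorship: agent i (in fixed order) takes its counts[i]
    favourite items among those still available; returns total utilities."""
    remaining = set(range(len(utils[0])))
    totals = []
    for agent, k in enumerate(counts):
        picks = sorted(remaining, key=lambda j: -utils[agent][j])[:k]
        totals.append(sum(utils[agent][j] for j in picks))
        remaining -= set(picks)
    return totals

def best_counts(n_agents, n_items, welfare=min, trials=300):
    """Find the count vector maximising expected (egalitarian) welfare
    over i.i.d. uniform utilities; exponential, for tiny instances only."""
    profiles = [[[random.random() for _ in range(n_items)]
                 for _ in range(n_agents)] for _ in range(trials)]
    vectors = [c for c in product(range(n_items + 1), repeat=n_agents)
               if sum(c) == n_items]
    return max(vectors, key=lambda c: sum(welfare(run_dictatorship(c, u))
                                          for u in profiles))
```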

Joint work with Guillaume Méroué, Sylvain Bouveret and Hugo Gilbert.

[Presentation slides]



Rida Laraki Level-strategyproof Belief Aggregation Mechanisms

In the problem of aggregating experts' probabilistic predictions over an ordered set of outcomes, we introduce the axiom of level-strategyproofness (level-SP) and prove it to be natural and robust, as it implies incentive compatibility in a rich domain of single-peakedness over the space of cumulative distribution functions (CDFs). This contrasts with the literature, which assumes single-peaked preferences over the space of probability distributions. Our main results are: (1) a reduction of our problem to the aggregation of CDFs; (2) the axiomatic characterization of level-SP probability aggregation functions, with and without the addition of other axioms; (3) impossibility results which provide bounds for our characterization; (4) the axiomatic characterization of two new and practical level-SP methods: the proportional-cumulative method and the middlemost-cumulative method; and (5) the application of proportional-cumulative to extend approval voting, majority rule, and majority judgment methods to situations where voters/experts are uncertain about how to grade the candidates/alternatives to be ranked.
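
A sketch of the reduction to CDFs, with the pointwise median as the level aggregator; this is only my plausible reading of a middlemost-cumulative-style method, and the paper's exact definitions may differ:

```python
import statistics

def aggregate_by_levels(predictions, level_aggregator=statistics.median):
    """predictions: list of probability vectors over ordered outcomes.

    (1) Convert each prediction to a CDF, (2) aggregate the experts'
    CDF values at every level, (3) difference back to probabilities.
    With the median, the aggregate is again a valid CDF, since a
    pointwise median of nondecreasing functions is nondecreasing."""
    cdfs = []
    for p in predictions:
        acc, cdf = 0.0, []
        for prob in p:
            acc += prob
            cdf.append(acc)
        cdfs.append(cdf)
    agg = [level_aggregator([cdf[k] for cdf in cdfs])
           for k in range(len(predictions[0]))]
    return [agg[0]] + [agg[k] - agg[k - 1] for k in range(1, len(agg))]
```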

Joint work with Estelle Varloot (University of Liverpool).

[Presentation slides]



Jean-François Laslier Political preference profiles around the world and the mechanical effects of voting rules.

Based on CSES data (Comparative Study of Electoral Systems), which gather opinion surveys from many countries, we study 205 elections for which the questionnaires include the rating of political parties by the respondents on a 0-10 scale. Taking these data at face value as representing political preference profiles and considering parties as "candidates", we simulate a number of single-winner voting rules. We are thereby able to show to what extent these rules differ on "real and sincere" political situations. We find that most voting rules (two-round majority voting, instant runoff, approval voting, best average and best median schemes) usually coincide, with plurality voting producing the most exotic results. We also try to relate the differences in outcomes to features of the election such as affective polarization (as seen from the ratings) or ideological polarization of the party structure (for which we have expert data).
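
A minimal sketch of how several of these rules can be read off a voters-by-parties matrix of 0-10 ratings (the approval threshold of 6 is my own illustrative choice, not the paper's):

```python
import statistics

def winners_from_ratings(ratings, approval_threshold=6):
    """ratings: list of rows, one per respondent, of 0-10 party ratings.
    Returns the winning party index under several rules, assuming the
    ratings are sincere."""
    n_parties = len(ratings[0])
    cols = [[row[p] for row in ratings] for p in range(n_parties)]
    scores = {
        'plurality': [sum(max(range(n_parties), key=row.__getitem__) == p
                          for row in ratings) for p in range(n_parties)],
        'approval': [sum(r >= approval_threshold for r in col)
                     for col in cols],
        'best average': [statistics.mean(col) for col in cols],
        'best median': [statistics.median(col) for col in cols],
    }
    return {rule: max(range(n_parties), key=s.__getitem__)
            for rule, s in scores.items()}
```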

Joint work with Romain Lachat (Sciences Po, Paris).

[Presentation slides]



Antonin Macé A Model of Repeated Collective Decisions.

A committee makes repeated decisions under a qualified majority rule, with complete information on current preferences. We show that the optimal equilibrium of this repeated game coincides on the equilibrium path with a cutoff decision rule: at each stage, an efficient proposal is collectively accepted if and only if the pivotal individual's utility exceeds some fixed negative cutoff. In contrast with the stage-game equilibrium, where individuals vote sincerely, the optimal equilibrium involves a form of implicit logrolling, with individuals sometimes voting against their preference to achieve the efficient decision. As a result, both the expected level of utility and the degree of consensus may be significantly higher than at the stage-game equilibrium. We characterize the set of optimal voting rules in the limit case where the number of individuals grows large and show that it always contains either the simple majority rule or the unanimity rule. We then introduce Abreu et al. (1993)'s renegotiation-proofness requirement, and characterize the optimal renegotiation-proof equilibrium. In the limit case, simple majority is always optimal but the set of optimal voting rules expands as the discount factor increases, eventually including all majority thresholds if individuals are patient enough. The model provides a rationale for the use of the unanimity rule, while explaining the prevalence of consensus in committees using a lower majority threshold.
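
A stylized simulation of the cutoff rule against sincere stage-game voting (the uniform utility distribution and the cutoff value are my own illustrative assumptions, not the paper's):

```python
import random

def average_stage_utilities(n_voters=5, majority=4, cutoff=-0.3,
                            stages=10_000):
    """At each stage, utilities are i.i.d. uniform on [-1, 1].  The cutoff
    rule accepts an efficient proposal (positive total utility) iff the
    pivotal voter's utility exceeds the negative cutoff; sincere voting
    accepts iff at least `majority` voters gain.  Returns the average
    total utility per stage under each behaviour."""
    coop = sincere = 0.0
    for _ in range(stages):
        u = sorted(random.uniform(-1, 1) for _ in range(n_voters))
        total = sum(u)
        pivotal = u[n_voters - majority]  # the majority-th highest utility
        if total > 0 and pivotal > cutoff:
            coop += total
        if sum(x > 0 for x in u) >= majority:
            sincere += total
    return coop / stages, sincere / stages
```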

Joint with Rafael Treibich.

[Presentation slides]



Andrew Mackenzie Menu mechanisms

We investigate menu mechanisms: dynamic mechanisms where at each history, an agent selects from a menu of his possible assignments. We consider both ex-post implementation and full implementation, for both subgame perfection and a strengthening of dominance that covers off-path histories, and provide conditions under which menu mechanisms provide these implementations of rules. Our results cover a variety of environments, including elections, marriage, college admissions, auctions, labor markets, matching with contracts, and object allocation.

Joint work with Yu Zhou.

[Presentation slides]



Thierry Marchant Voting theory, decision theory and multi-criteria decision analysis

Many axiomatic results concerning aggregation procedures in multi-criteria decision analysis have been obtained in the framework of voting theory or in that of conjoint measurement. We show that, although both frameworks are helpful for a better understanding of some aggregation procedures, neither is totally appropriate for multi-criteria decision analysis. We propose a new framework that is more relevant for the axiomatization of aggregation procedures in the context of multi-criteria decision analysis. We present some axiomatic results obtained in this framework, demonstrating its interest.

[Presentation slides]



Vincent Merlin The Probability of Disputable Outcomes under Direct and Indirect Elections

Defenders of the Electoral College routinely invoke a traditional argument to reject proposals for a national popular vote. Granted, they say, Florida in 2000 was a national nightmare, but the agony would be far greater if such a dispute extended over the entire nation. Proponents of a direct vote reply by conjecturing that the Electoral College increases the frequency of disputable elections. Indeed, the 2020 US presidential election was another instance of a disputable election on the legal front. We investigate whether we should expect disputable outcomes to be more frequent under the present US indirect system as compared with a direct vote and, if so, by how much. We use two methods: a historical analysis of actual outcomes in presidential elections, and an a priori formal model borrowed from Social Choice Theory (Impartial Anonymous Culture, IAC). Depending on the thresholds one posits for disputability, the historical analysis shows that disputable elections have been about two to six times more frequent under the Electoral College. In a model where all the states have the same population, the IAC approach produces an impressively compatible intermediate ratio of 4:1. We also explore the impact of differences in the population of the states on the likelihood of legal disputes via computer simulations.
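
A Monte Carlo sketch in the IAC spirit, with equal-population states each carrying one electoral vote and a deliberately stylized notion of disputability (all modelling choices here are mine, much simplified from the paper's analysis):

```python
import random

def disputable_rates(n_states=51, margin=0.001, trials=100_000):
    """Each state's vote share for candidate A is drawn uniformly.  An
    election is disputable under a direct vote if the national margin is
    below `margin`; under the indirect system, if the states that are
    that close and held by the Electoral College winner could jointly
    overturn the result."""
    direct = indirect = 0
    for _ in range(trials):
        shares = [random.random() for _ in range(n_states)]
        if abs(2 * sum(shares) / n_states - 1) < margin:
            direct += 1
        a_states = sum(s > 0.5 for s in shares)
        winner_is_a = 2 * a_states > n_states
        seat_margin = abs(2 * a_states - n_states)
        close_winner_states = sum(abs(2 * s - 1) < margin
                                  and (s > 0.5) == winner_is_a
                                  for s in shares)
        if 2 * close_winner_states >= seat_margin:
            indirect += 1
    return direct / trials, indirect / trials
```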

Joint work with Jack Nagel and Théo Duchemin.

[Presentation slides]



Stefan Napel Influence and manipulation in weighted committee voting

Committee decisions on three or more alternatives depend on the adopted voting method and the underlying distribution of votes (e.g. voting weights, party seats, individual shareholdings). Based on a structural analysis of how voting weights affect the aggregation of individual preferences into a collective decision, and on generalizations of the Penrose-Banzhaf and Shapley-Shubik indices from binary votes to general social choice, we study the distribution of voting power and who benefits from the adoption of a particular voting rule (e.g. a plurality vote vs. plurality voting with a runoff vs. pairwise comparisons). We also investigate the interaction of voting weights, voting methods and the manipulability of committee decisions.
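
One plausible Monte Carlo rendering of such a generalized power index, for weighted plurality over three alternatives; this is an illustration in the spirit of the talk, not necessarily the authors' exact definitions:

```python
import random
from itertools import permutations

RANKINGS = list(permutations('abc'))

def plurality_winner(prefs, weights):
    score = {x: 0 for x in 'abc'}
    for ranking, w in zip(prefs, weights):
        score[ranking[0]] += w
    return max(sorted(score), key=score.__getitem__)  # alphabetic tie-break

def influence(weights, trials=20_000):
    """For each voter, estimate the probability that redrawing their
    ranking changes the committee outcome, with all preferences i.i.d.
    uniform over the six strict rankings of three alternatives."""
    n = len(weights)
    counts = [0] * n
    for _ in range(trials):
        prefs = [random.choice(RANKINGS) for _ in range(n)]
        outcome = plurality_winner(prefs, weights)
        for i in range(n):
            redrawn = prefs[:i] + [random.choice(RANKINGS)] + prefs[i + 1:]
            counts[i] += plurality_winner(redrawn, weights) != outcome
    return [c / trials for c in counts]
```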

Joint work with Sascha Kurz and Alexander Mayer.

[Presentation slides]



Arianna Novaro (replacing Ulle Endriss) Representation-Faithful Aggregation

When modelling a decision-making scenario in which several agents need to aggregate their preferences, one of the most fundamental choices we face is how to represent the alternatives available to them. For example, when the matter at stake is the time slot for a meeting, then we might represent the available alternatives either in terms of the beginning and the end of the meeting, or in terms of its beginning and its duration. This choice might greatly affect the range of aggregation rules available to us. Yet, in the literature on social choice theory the crucial issue of choosing an adequate form of representation is rarely given much thought. In this talk, I will introduce the concept of "representation-faithfulness" of an aggregation rule, and I will illustrate the richness of this new concept with impossibility and characterisation results in the domain of interval aggregation.
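
A worked micro-example of why representation matters (my own, not from the talk): coordinate-wise median aggregation of meeting slots gives different results under the (begin, end) and the (begin, duration) representations, whereas with the coordinate-wise mean the two would coincide by linearity:

```python
import statistics

# Three agents' proposed meeting slots, as (begin, end) in hours:
slots = [(9, 10), (9, 13), (11, 12)]

# Median aggregation under the (begin, end) representation:
begin = statistics.median(b for b, e in slots)          # 9
end = statistics.median(e for b, e in slots)            # 12

# Median aggregation under the (begin, duration) representation:
duration = statistics.median(e - b for b, e in slots)   # 1

print((begin, end), (begin, begin + duration))  # (9, 12) versus (9, 10)
```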

This is joint work with Zoi Terzopoulou.

[Presentation slides]



Gabriella Pigozzi Spaces of argumentation and their interaction. Some elements of thought inspired by controversies and disputes in France during the Covid-19 crisis

During the Covid-19 pandemic, many public policy decisions had to be taken. These decisions took place in an unusual context and relied on a very novel list of policy actions, such as lockdowns, curfews or school closures. Because of their novelty, decision-makers needed to justify such decisions. As a consequence, we witnessed a rapid construction and circulation of arguments in public spaces. One particularity of the Covid-19 debate is that most arguments were labelled as "scientific": scientists were counselling governments around the globe and became a regular presence in the media.

In this work, we focus on the interactions between scientists and the media. We are interested in moments when scientific arguments are debated in and judged by worlds other than science, where the confrontation, the approval of arguments and the reasoning do not follow the same standards as in science.


Joint work in progress with Juliette Rouchier and Dov Gabbay.

[Presentation slides]



Xiangyu Qu Deliberate Social Opinions and Consistently Utilitarian Aggregation

We suggest a novel method to aggregate individual preferences under uncertainty. Individuals agree to be guided by subjective expected utility, but may differ in both beliefs and values. We assume that social preference admits a subjective expected utility representation restricted to the acts whose induced probabilities are a matter of consensus among individuals. We do not impose a concrete functional form for the remaining acts, and instead require society to evaluate them through consensus acts. A restricted Pareto condition, along with various conditions on social preferences, is shown to be equivalent to consistently utilitarian Choquet expected utility, α-maxmin expected utility, and so on.

Joint work with Antoine Billot.

[Presentation slides]



Agnieszka Rusinowska The design of public debate in social networks

We propose a model of the joint evolution of opinions and social relationships in a setting where social influence decays over time. The dynamics are based on bounded confidence: social connections between individuals with distant opinions are severed while new connections are formed between individuals with similar opinions. Our model naturally gives rise to strong diversity, i.e. the persistence of heterogeneous opinions in connected societies, a phenomenon that most existing models fail to capture. The intensity of social interactions is the key parameter that governs the dynamics. First, it determines the asymptotic distribution of opinions. In particular, increasing the intensity of social interactions brings society closer to consensus. Second, it determines the risk of polarization, which is shown to increase with the intensity of social interactions. Our results allow us to frame the problem of the design of public debates in a formal setting. We hence characterise the optimal strategy for a social planner who controls the intensity of the public debate and thus faces a trade-off between the pursuit of social consensus and the risk of polarization. We also consider applications to political campaigning and show that both minority and majority candidates can have incentives to lead society towards polarization.
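
A stylized sketch of co-evolving opinions and links under bounded confidence; the update rule and all parameters are my own illustrative choices rather than the paper's model:

```python
import random

def co_evolution(n=100, confidence=0.2, intensity=0.1, steps=50_000):
    """Opinions live in [0, 1].  A random linked pair with close opinions
    moves together (scaled by `intensity`); a linked pair with distant
    opinions is severed; a random unlinked pair with close opinions is
    connected.  Heterogeneous opinion clusters can persist (strong
    diversity)."""
    opinions = [random.random() for _ in range(n)]
    links = {frozenset((i, j)) for i in range(n) for j in range(i + 1, n)
             if random.random() < 0.1}
    for _ in range(steps):
        if not links:
            break
        i, j = random.choice(list(links))
        if abs(opinions[i] - opinions[j]) < confidence:
            shift = intensity * (opinions[j] - opinions[i])
            opinions[i] += shift
            opinions[j] -= shift
        else:
            links.discard(frozenset((i, j)))
        a, b = random.sample(range(n), 2)
        if abs(opinions[a] - opinions[b]) < confidence:
            links.add(frozenset((a, b)))
    return opinions, links
```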

Joint work with Michel Grabisch and Antoine Mandel.

[Presentation slides]



Piotr Skowron Proportional Participatory Budgeting with Additive Utilities

We study voting rules for participatory budgeting, where a group of voters collectively decides which projects should be funded using a common budget. We allow the projects to have arbitrary costs, and the voters to have arbitrary additive valuations over the projects. We formulate two axioms that guarantee proportional representation to groups of voters with common interests. To the best of our knowledge, no known rule for participatory budgeting satisfies either of the two axioms; in addition, we show that the most prominent proportional rule for committee elections, Proportional Approval Voting, cannot be adapted to arbitrary costs or to additive valuations so that it would satisfy our axioms of proportionality. We construct a simple and attractive voting rule that satisfies one of our axioms (for arbitrary costs and arbitrary additive valuations), and that can be evaluated in polynomial time. We prove that our other, stronger axiom is also satisfiable, though by a computationally more expensive and less natural voting rule.
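
For flavour, here is a simplified sketch of the Method of Equal Shares for additive utilities, the kind of proportional rule this line of work develops; I am assuming, without checking against the paper, that the constructed rule is of this type, and the details will differ:

```python
def method_of_equal_shares(budget, costs, utils):
    """costs: dict project -> cost; utils: list of dicts project -> utility
    (one dict per voter).  Every voter starts with an equal share of the
    budget; projects are funded in increasing order of the per-utility
    price rho at which their supporters can jointly cover the cost."""
    n = len(utils)
    money = [budget / n] * n
    chosen = []
    while True:
        best, best_rho = None, None
        for project, cost in costs.items():
            if project in chosen:
                continue
            supporters = sorted(
                (i for i in range(n) if utils[i].get(project, 0) > 0),
                key=lambda i: money[i] / utils[i][project])
            paid = 0.0
            util_left = sum(utils[i][project] for i in supporters)
            rho = None
            for i in supporters:  # smallest rho covering the cost, if any
                if paid + (money[i] / utils[i][project]) * util_left >= cost:
                    rho = (cost - paid) / util_left
                    break
                paid += money[i]
                util_left -= utils[i][project]
            if rho is not None and (best_rho is None or rho < best_rho):
                best, best_rho = project, rho
        if best is None:
            return chosen
        for i in range(n):
            money[i] -= min(money[i], best_rho * utils[i].get(best, 0))
        chosen.append(best)
```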

[Presentation slides]



Zoi Terzopoulou Voting rules for approvals and rankings: Are they (in)compatible?

Voting rules that rely on approval ballots are often contrasted with those that are based on preference rankings. We further explore the compatibility between these two kinds of methods by comparing the winners of two alternative elections: one where voters cast approval ballots and the winners are decided using approval voting, and one where voters report preference rankings and some classical voting rule determines the winners. Assuming that the reported rankings are consistent with the approval ballots, what can we say about the alternative winners of the two elections? If the employed voting rule satisfies a notion of positive approval compatibility, then every approval winner could be a ranking winner, and a negative version of this notion would imply that every approval loser could be a ranking loser. Although negative compatibility is a very weak notion, we find that positive compatibility divides usual voting rules into two classes: several positional scoring rules (excluding the Borda rule and plurality) violate it, while the Borda rule, plurality, and Condorcet-consistent rules satisfy it.
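
The existential quantification in positive compatibility ("could be a ranking winner") can be made concrete with a brute-force check over consistent ranking profiles, here with Borda as the ranking rule (tiny instances only; names are mine):

```python
from itertools import permutations, product

def consistent_rankings(approved, alternatives):
    """All strict rankings placing every approved alternative above every
    non-approved one."""
    rest = [a for a in alternatives if a not in approved]
    return [top + bottom for top in permutations(approved)
            for bottom in permutations(rest)]

def can_win_borda(x, approval_profile, alternatives):
    """Is there a ranking profile, consistent with the approval ballots,
    under which alternative x is a Borda co-winner?"""
    m = len(alternatives)
    options = [consistent_rankings(ballot, alternatives)
               for ballot in approval_profile]
    for profile in product(*options):
        score = {a: 0 for a in alternatives}
        for ranking in profile:
            for position, a in enumerate(ranking):
                score[a] += m - 1 - position
        if score[x] == max(score.values()):
            return True
    return False
```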

This is joint work with Jérôme Lang and William Zwicker.

[Presentation slides]



Stéphane Zuber Foundations of utilitarianism under risk and variable population

Utilitarianism is the most prominent social welfare function in economics. We present three new axiomatic characterizations of utilitarian (that is, additively separable) social welfare functions in a setting where there is risk over both population size and the welfares of individuals. First, we show that, given uncontroversial basic axioms, Blackorby et al.'s (1998) Expected Critical-Level Generalized Utilitarianism (ECLGU) is equivalent to a new axiom holding that it is better to allocate higher utility-conditional-on-existence to possible people who have a higher probability of existence. The other two novel characterizations extend classic axiomatizations of utilitarianism from settings with either social risk or variable population, considered alone. By considering both social risk and variable population together, we clarify the fundamental normative considerations underlying utilitarian policy evaluation.

Joint work with Dean Spears.

[Presentation slides]