Workshop on the Distortion and Information-Efficiency Tradeoffs

Confirmed Speakers

Georgios Amanatidis (University of Essex)

Peeking Behind the Ordinal Curtain: Improving Distortion via Cardinal Queries

The notion of distortion was introduced to quantify the inefficiency of using only ordinal information when trying to maximize the social welfare. Since then, this research area has flourished and the distortion has been quantified for many fundamental settings. However, the vast majority of the existing literature focuses on the case where nothing is known beyond the ordinal preferences of the agents over the alternatives. In this work, we take a more expressive approach, and consider mechanisms that are allowed to further ask a few cardinal queries in order to gain partial access to the underlying values that the agents have for the alternatives. With this extra power, we design new deterministic mechanisms that achieve significantly improved distortion bounds and outperform the best-known randomized ordinal mechanisms. We draw an almost complete picture of the number of queries required to achieve specific distortion bounds.
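As background for the abstracts below, the distortion of a mechanism can be formalized as follows (a standard definition; the notation here is my own):

```latex
\[
\operatorname{dist}(f) \;=\; \sup_{\vec{u}}\;
  \frac{\max_{a \in A} \operatorname{SW}(a, \vec{u})}
       {\operatorname{SW}\!\bigl(f(\sigma(\vec{u})), \vec{u}\bigr)},
\qquad
\operatorname{SW}(a, \vec{u}) \;=\; \sum_{i=1}^{n} u_i(a),
\]
```

where $\vec{u}$ ranges over the admissible utility profiles, $A$ is the set of alternatives, and $\sigma(\vec{u})$ denotes the ordinal rankings induced by $\vec{u}$, which are all that the mechanism $f$ observes.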

Elliot Anshelevich (Rensselaer Polytechnic Institute)

Bounding Distortion under Metric Preferences

Distortion measures the inefficiency of using only ordinal information when attempting to select an optimal outcome, instead of the unknown numerical information. For example, in social choice applications, agents may have numerical costs for each of the possible alternatives, but the mechanism must decide on a good alternative based only on the ordinal preferences of the agents, which are induced by the underlying costs. In this talk, I will address bounding distortion under metric preferences, i.e., under the assumption that the agent costs form a metric space and thus obey the triangle inequality. I will first discuss basic approaches to bounding distortion of social choice mechanisms in this setting. I will then talk about bounding distortion in matching and other related graph problems, in which it is also reasonable to assume that agents cannot express the numerical values of their utility for different outcomes, but are still able to rank the outcomes in their order of preference. Time permitting, I will conclude with a discussion of some open problems.
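To make the cost-based notion concrete, here is a minimal sketch (my own toy code, not from the talk) that computes the distortion of a chosen alternative on a single metric instance, where each voter's cost for a candidate is her distance to it:

```python
def total_cost(distances, candidate):
    """Social cost: sum of voters' distances to a candidate."""
    return sum(d[candidate] for d in distances)

def distortion(distances, chosen):
    """Ratio of the chosen candidate's social cost to the optimum."""
    candidates = distances[0].keys()
    optimal = min(total_cost(distances, c) for c in candidates)
    return total_cost(distances, chosen) / optimal

# A toy instance: each dict maps candidate -> distance for one voter.
voters = [{"A": 1.0, "B": 1.0}, {"A": 0.5, "B": 1.5}, {"A": 2.0, "B": 0.0}]
print(distortion(voters, "A"))  # cost(A)=3.5, cost(B)=2.5 -> 1.4
```

A mechanism's distortion is then the worst case of this ratio over all metric instances consistent with the rankings it observes.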

Gerdus Benade (Boston University)

Preference elicitation for participatory budgeting

Participatory budgeting enables the allocation of public funds by collecting and aggregating individual preferences. Making the most of this new paradigm requires a rethinking of some of the basics of computational social choice, including the very way in which individuals express their preferences. We analytically compare four preference elicitation methods (knapsack votes, rankings by value or value for money, and threshold approval votes) through the lens of implicit utilitarian voting, and find that threshold approval votes are qualitatively superior. This conclusion is supported by experiments using data from real participatory budgeting elections.
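As a rough illustration of the elicitation format (my own sketch; the project names, values, and threshold are made up and are not the paper's construction), a threshold approval vote asks each voter to approve every project whose value to her meets a given threshold:

```python
def threshold_approval_votes(values, threshold):
    """Each voter approves the projects whose value meets the threshold."""
    return [{p for p, v in voter.items() if v >= threshold} for voter in values]

def approval_scores(votes):
    """Aggregate approval ballots by counting approvals per project."""
    scores = {}
    for ballot in votes:
        for p in ballot:
            scores[p] = scores.get(p, 0) + 1
    return scores

# Hypothetical values of three voters for three projects.
values = [
    {"park": 0.7, "library": 0.2, "road": 0.1},
    {"park": 0.5, "library": 0.4, "road": 0.1},
    {"park": 0.1, "library": 0.6, "road": 0.3},
]
print(approval_scores(threshold_approval_votes(values, 0.4)))
```

The analytic question is then how well such coarse ballots let the aggregator approximate the utilitarian optimum.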

Umang Bhaskar (Tata Institute of Fundamental Research)

Bounds on the Welfare of Truthful Voting Mechanisms

We consider the impact of imposing truthfulness on the efficiency of preference aggregation mechanisms, from the utilitarian perspective. Our measure of efficiency is the distortion of the utilitarian social welfare. We consider both ordinal (where voters submit a preference ranking of candidates) and cardinal (where voters submit cardinal scores for the candidates) mechanisms, and present nearly matching upper and lower bounds on the distortion.

Yu Cheng (University of Illinois)

Of the People: Voting with Representative Candidates

We study voting rules when candidates and voters are embedded in a common metric space. We consider the case when candidates are representative of the population, in the sense that they are drawn i.i.d. from the population of the voters, and analyze the expected distortion of different voting rules.

Our first result is that representative candidates improve the social welfare of a population. We restrict attention to the most fundamental social choice setting: two candidates and a majority rule election. It is known that the candidate selected by the majority rule can be three times as far from the population as the socially optimal one. We study how much this ratio improves when the candidates are representative, and how the answer depends on the complexity of the underlying metric space.
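The factor of three can be seen on a small worked instance (a standard style of lower-bound example; this particular instance is my own, not taken from the talk): candidates A and B on a line, a slight majority of voters just on A's side of the midpoint, and the remaining voters at B:

```python
# Candidates on the real line: A at 0, B at 2.
A, B = 0.0, 2.0
eps = 1e-3
n = 1000

# n/2 + 1 voters sit at 1 - eps (they prefer A); n/2 - 1 voters sit at B.
voters = [1 - eps] * (n // 2 + 1) + [B] * (n // 2 - 1)

def cost(c):
    """Social cost of a candidate: sum of voters' distances to it."""
    return sum(abs(v - c) for v in voters)

# Majority rule elects A, but B is nearly three times cheaper.
print(cost(A) / cost(B))
```

As eps shrinks and n grows, the ratio approaches 3, matching the known worst-case bound for majority rule.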

For voting with multiple candidates, we give a clean and tight characterization of positional voting rules that have constant expected distortion. Our characterization result immediately implies constant expected distortion for Borda Count. On the other hand, we obtain super-constant expected distortion for Plurality and Veto. These results contrast with previous results on voting with metric preferences: When the candidates are chosen adversarially, all of the preceding voting rules have distortion linear in the number of candidates or voters.

Aris Filos-Ratsikas (University of Liverpool)

An Introduction to the Distortion and Information-Efficiency Tradeoffs

In this introductory talk, I will provide a brief overview of the main notions associated with the distortion of ordinal mechanisms, and of the related literature. I will present the basic results for the case of unrestricted normalized utilities, introduce the setting of metric distortion, and explain how the results on the tradeoffs between information and efficiency fit into this context.

Vasilis Gkatzelis (Drexel University)

Resolving the Optimal Metric Distortion Conjecture

We study the following metric distortion problem: there are two finite sets of points, V and C, that lie in the same metric space, and our goal is to choose a point in C whose total distance from the points in V is as small as possible. However, rather than having access to the underlying distance metric, we only know, for each point in V, a ranking of its distances to the points in C. We propose algorithms that choose a point in C using only these rankings as input and we provide bounds on their distortion (worst-case approximation ratio). A prominent motivation for this problem comes from voting theory, where V represents a set of voters, C represents a set of candidates, and the rankings correspond to ordinal preferences of the voters. A major conjecture in this framework is that the optimal deterministic algorithm has distortion 3. We resolve this conjecture by providing a polynomial-time algorithm that achieves distortion 3, matching a known lower bound. We do so by proving a novel lemma about matching rankings of candidates to candidates, which we refer to as the ranking-matching lemma. This lemma induces a family of novel algorithms, which may be of independent interest, and we show that a special algorithm in this family achieves distortion 3. We also provide more refined, parameterized, bounds using the notion of α-decisiveness, which quantifies the extent to which a voter may prefer her top choice relative to all others. Finally, we introduce a new randomized algorithm with improved distortion compared to known results, and also provide improved lower bounds on the distortion of all deterministic and randomized algorithms.

Kamesh Munagala (Duke University)

Social Choice over Large Decision Spaces

For many problems in modern social choice, the decision space may be so large that it is not feasible to enumerate the outcomes in this space, rendering most ordinal voting schemes impractical. For example, when voting over possible budgets in a municipality, the number of alternatives is all combinations of projects fitting within the budget; or when deciding transportation policy of a city, the set of all policies may itself be a priori unknown. In such settings, how can we design decentralized, simple, and scalable social choice mechanisms, and how do we analyze them? Ideally, we seek mechanisms that only require participation from a few randomly chosen voters, each of whom proposes and/or ranks at most a constant number of alternatives.

In this talk, we will show that Metric Distortion, initially proposed by Anshelevich et al., provides a clean analytic framework for such mechanisms, offering a path forward to achieving fine-grained properties and distinctions. We propose and analyze three novel mechanisms: Sequential pairwise deliberation, Random referee, and Randomized Copeland, and show that each of these mechanisms has minimal complexity for the analytic properties it seeks to achieve.
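As one concrete ingredient, here is a plain Copeland computation from ordinal rankings (my own sketch; the talk's Randomized Copeland mechanism builds randomization on top of such pairwise majority comparisons, and this is not its exact definition):

```python
from itertools import combinations

def copeland_winner(rankings, candidates):
    """Copeland score: number of strict pairwise majority wins,
    computed from rankings alone (earlier position = more preferred)."""
    score = {c: 0 for c in candidates}
    for a, b in combinations(candidates, 2):
        a_wins = sum(r.index(a) < r.index(b) for r in rankings)
        if 2 * a_wins > len(rankings):
            score[a] += 1
        elif 2 * a_wins < len(rankings):
            score[b] += 1
    return max(score, key=score.get)

rankings = [["x", "y", "z"], ["y", "z", "x"], ["y", "x", "z"]]
print(copeland_winner(rankings, ["x", "y", "z"]))  # y beats both x and z
```

Note that this rule only ever queries pairwise comparisons, which is why mechanisms of this flavor remain usable when voters rank only a handful of alternatives.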

Nisarg Shah (University of Toronto)

On the Communication-Distortion Tradeoff in Voting

In the standard model of voting, voters express ranked preferences over alternatives and the voting rule aggregates them into a collective decision. The implicit utilitarian approach to voting posits that these expressed ranked preferences stem from underlying cardinal preferences and proposes to minimize the distortion (i.e., the worst-case approximation of social welfare subject to the available information). But then why ask for rankings? What if we could ask the voters to report some other information about their cardinal preferences? How low can the distortion go if we are willing to ask for k bits of information from each voter? In this talk, I will provide an overview of recent results which chart the Pareto frontier of the tradeoff between communication complexity and distortion in voting. This is based on joint papers with Debmalya Mandal, Ariel D. Procaccia, and David Woodruff published at NeurIPS'19 and EC'20.

Alexandros Voudouris (University of Essex)

The Distortion of Distributed Voting and Facility Location

We will discuss the distortion of distributed mechanisms for two prominent examples in social choice theory, voting and facility location. In our unified distributed setting, there is a set of agents who are partitioned into a set of districts, and the aggregation of their preferences over alternative outcomes (either candidates in the case of voting, or points on the real line in the case of facility location) is done in two steps: the preferences of the agents in each district are first aggregated into a representative alternative for the district, and then one of the representatives is chosen as the outcome. In contrast to centralized settings, where inefficiency is the product of limited information due to the ordinal representation of the preferences, in our case the main reason for inefficiency is the indirect access to the preferences of the agents.
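A tiny sketch of the two-step process (my own toy instance, not from the talk), using plurality both within districts and over the representatives; note that the distributed winner can differ from the centralized plurality winner even though every ballot is fully observed:

```python
from collections import Counter

def plurality(ballots):
    """Return the alternative with the most votes."""
    return Counter(ballots).most_common(1)[0][0]

def distributed_winner(districts):
    """Step 1: each district elects a representative alternative.
    Step 2: plurality over the district representatives."""
    return plurality([plurality(d) for d in districts])

# Top choices of nine agents, partitioned into three districts.
districts = [["a", "a", "b"], ["a", "a", "b"], ["b", "b", "b"]]
print(distributed_winner(districts))     # 'a' wins two of three districts
print(plurality(sum(districts, [])))     # 'b' has more votes overall
```

The gap between the two outcomes is exactly the kind of inefficiency, caused by indirect access to preferences, that the distortion of distributed mechanisms measures.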