Programme

Thursday


09:00—09:30       Arrival, registration, and coffee

09:30—10:15       Stéphane Gonzalez

10:15—10:45       Coffee break

10:45—11:30       Rebecca Reiffenhäuser

11:30—12:00       Coffee break

12:00—12:45       Ulrike Schmidt-Kraepelin 

12:45—14:30       Lunch break

14:30—15:15       Dolors Berga

15:15—16:45       Poster session with coffee

16:45—17:30       Iannis Caragiannis


Friday


09:00—09:30       Arrival and coffee

09:30—10:15       Michele Gori

10:15—10:45       Coffee break

10:45—11:30       Anke Gerber

11:30—12:00       Coffee break

12:00—12:45       Umberto Grandi 

12:45—14:30       Lunch break

14:30—15:15       Marc Vorsatz

15:15—15:45       Coffee break 

15:45—16:30       Edith Elkind

Abstracts of Invited Talks

Dolors Berga (University of Girona)

Condorcet Consistency and Pairwise Justifiability under Variable Agendas 

We use a principle, called pairwise justifiability, to assess changes in a collective decision when either the preferences of the individuals or the available alternatives vary. We compare rules satisfying this principle with those that satisfy the Condorcet principle. We show that, despite the different logic underlying these two requirements, they are equivalent when imposed on anonymous and neutral rules defined over appropriate domains.

This talk is based on joint work with Salvador Barberà, Bernardo Moreno, and Antonio Nicolò.

Iannis Caragiannis (Aarhus University)

New Fairness Concepts for Allocating Indivisible Items [slides] 

For the fundamental problem of fairly dividing a set of indivisible items among agents, envy-freeness up to any item (EFX) and maximin fairness (MMS) are arguably the most compelling fairness concepts proposed to date. Unfortunately, despite significant efforts over the past few years, whether EFX allocations always exist remains an enigmatic open problem, let alone whether they can be computed efficiently. Furthermore, we now know that MMS allocations are not always guaranteed to exist. These facts weaken the usefulness of both EFX and MMS, despite their appealing conceptual characteristics. We propose two alternative fairness concepts, called epistemic EFX (EEFX) and minimum EFX value fairness (MXS), inspired by EFX and MMS. For both, we explore their relationships to well-studied fairness notions and, more importantly, prove that EEFX and MXS allocations always exist and can be computed efficiently for additive valuations. Our results show that the new fairness concepts are excellent alternatives to EFX and MMS.

This talk is based on joint work with Jugal Garg, Nidhi Rathi, Eklavya Sharma, and Giovanna Varricchio. 

Edith Elkind (University of Oxford and Alan Turing Institute)

Welfare and Fairness in Temporal Slot Assignment [slides]

We consider the problem of assigning projects to time slots, in the setting where agents have approval preferences over projects that vary over time, and each project can be implemented at most once. While optimising the utilitarian welfare in this setting is easy, optimising egalitarian welfare turns out to be computationally hard, even under strong constraints on agents' preferences. However, we describe a randomised algorithm for this problem that runs in polynomial time if the number of agents is bounded by a constant; our proof uses algebraic techniques and extends to other concepts of welfare and fairness.

This talk is based on joint work with Nicholas Teh and Sonja Kraiczy (SAGT-2022).

Anke Gerber (University of Hamburg)

On the Endogenous Order of Play in Sequential Games [slides]

We formalize, under the name of games of addition, the strategic interaction between agents who can play non-simultaneously by adding payoff-relevant actions to those that any other players, or they themselves, have already taken, but who may also agree unanimously to stop adding actions and collect the payoffs associated with the truncated sequence of moves. Our formalization differs from that of extensive form games in that the order of the agents’ moves is not predetermined but emerges endogenously when applying an adapted version of a solution concept proposed by Dutta, Jackson and Le Breton (2004). We provide results regarding the properties of solutions to games of addition, and we also compare their equilibria with those we would obtain using extensive form games and subgame perfection as alternative tools of analysis. We provide evidence that, in general, fixing the order of play a priori may lead to subgame perfect equilibrium outcome predictions that are dramatically different from those obtained using games of addition and the endogenously determined order of play associated with our equilibrium notion. This inclines us to expect that games of addition may be useful in the analysis of interactions between players whose actions cannot naturally be constrained to respect a fixed order of moves.

This talk is based on joint work with Salvador Barberà.

Stéphane Gonzalez (University of Saint-Etienne)

Axiomatic Characterization of the Knapsack and Greedy Budgeting Rules [slides]  

We investigate the axiomatic properties of two popular budget allocation rules over discrete goods, the knapsack and the greedy rules. Goods are described by a price and a utility value. The knapsack rule selects the affordable bundle that maximizes the sum of the utility values attached to its items, while the greedy rule sequentially selects the good with the highest utility until the budget is exhausted. We show that, despite their differences, these rules belong to the same family of budget allocation rules, which we characterize axiomatically.
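As an illustration (not taken from the talk), the two rules can be sketched as follows. The goods, prices, and utilities are made-up values, and the brute-force knapsack search is only meant for small instances:

```python
from itertools import combinations

def knapsack_rule(goods, budget):
    """Select the affordable bundle maximizing total utility (brute force)."""
    best, best_util = (), 0
    for r in range(1, len(goods) + 1):
        for bundle in combinations(goods, r):
            price = sum(p for _, p, _ in bundle)
            util = sum(u for _, _, u in bundle)
            if price <= budget and util > best_util:
                best, best_util = bundle, util
    return {name for name, _, _ in best}

def greedy_rule(goods, budget):
    """Pick goods in decreasing utility order, skipping unaffordable ones."""
    chosen, spent = set(), 0
    for name, price, util in sorted(goods, key=lambda g: -g[2]):
        if spent + price <= budget:
            chosen.add(name)
            spent += price
    return chosen

# Illustrative goods as (name, price, utility) triples.
goods = [("a", 3, 5), ("b", 2, 4), ("c", 2, 4)]
print(knapsack_rule(goods, 4))  # {'b', 'c'}: total utility 8
print(greedy_rule(goods, 4))    # {'a'}: grabs the single highest-utility good
```

The example shows how the two rules can diverge on the same instance: greedy commits to the highest-utility good even when a cheaper pair yields more total utility.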

This talk is based on joint work with Federica Ceron and Adriana Navarro Ramos.

Michele Gori (University of Florence)

Symmetric Mechanisms for Two-Sided Matching Problems [slides]

We focus on the basic two-sided matching model, where there are two disjoint sets of agents of equal size, and each agent in a set has preferences - modelled by a linear order - on the agents in the other set. The ultimate goal is to find a matching that associates each agent in one set with an agent in the other set based on the agents' preferences. A mechanism is a rule that associates a set of matchings to each preference profile. Stability, which refers to the capability to select only stable matchings, is undoubtedly the most important property a mechanism should fulfill. Another crucial property, especially useful for applications, is resoluteness, which requires that the mechanism always selects a unique matching. The man-optimal and woman-optimal deferred acceptance algorithms are examples of stable and resolute mechanisms. However, those mechanisms are severely unfair since they strongly favor one of the two sides of the market. In this paper, we introduce a general framework to describe the symmetry of mechanisms, essentially identifying the level of symmetry of a mechanism with its level of fairness within and across the two sets of agents. We prove two main results, both involving the most general notion of symmetry known as “gender fairness”: there exists a resolute and gender fair mechanism if and only if each side of the market consists of an odd number of agents; there exists no resolute, stable and gender fair mechanism. 
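For reference, a minimal sketch of the proposer-optimal deferred acceptance algorithm mentioned above (Gale-Shapley); the agent names and preference lists are illustrative:

```python
def deferred_acceptance(proposer_prefs, responder_prefs):
    """Proposer-optimal deferred acceptance.

    Each dict maps an agent to a complete preference list over the other
    side, most preferred first; both sides have equal size.
    """
    # Precompute each responder's ranking for O(1) comparisons.
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in responder_prefs.items()}
    free = list(proposer_prefs)
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}  # responder -> currently held proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]  # best not-yet-tried responder
        next_choice[p] += 1
        if r not in match:
            match[r] = p                       # tentatively accept
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])              # r trades up, rejects old match
            match[r] = p
        else:
            free.append(p)                     # r rejects p
    return {p: r for r, p in match.items()}

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(deferred_acceptance(men, women))  # {'m2': 'w1', 'm1': 'w2'}
```

In this toy instance both men propose to w1 first; she keeps her preferred m2, so m1 ends up with w2 — a stable matching that favors the proposing side, illustrating the asymmetry the talk addresses.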

This talk is based on joint work with Daniela Bubboloni and Claudia Meo. 

Umberto Grandi (University of Toulouse)

How Divisive is an Alternative in a Profile of Rankings? [slides]

There is a plethora of measures in social choice theory to decide how much an alternative is liked by voters who expressed their preferences in the form of rankings. Few papers have attempted to measure other parameters, such as the cohesiveness or the diversity of the preferences collected. In this talk I will propose a measure of the divisiveness of an alternative in a profile of rankings, and report on recent empirical findings that explore the relation between the proposed measure of divisiveness and classical methods of ranking alternatives. I will also present results exploring the algorithmic properties of divisiveness and its resistance to control and manipulation.

This talk is based on recent work presented at IJCAI-2023 and in Nature Human Behaviour.

Rebecca Reiffenhäuser (University of Amsterdam)

Fairness in Non-Truthful Algorithms with Strategic Agents [slides]

When trying to fairly allocate a set of items to a group of strategic agents, at least in the absence of payments, truthful mechanisms are often unattainable. However, there does exist a range of algorithms to guarantee, for example, an EF1-allocation (envy-free up to one item) when agents are assumed to report their true preferences. This talk introduces a set of results showing that even when agents misreport their preferences to improve the value of their assigned sets, some standard, non-truthful algorithms retain strong fairness properties—with respect to the true, underlying preferences. 
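One standard non-truthful algorithm of this kind is round-robin picking, which guarantees EF1 with respect to the reported additive valuations; a minimal sketch with illustrative agents and values (not code from the talk):

```python
def round_robin(agents_values, items):
    """Round-robin allocation: agents take turns picking their most valued
    remaining item. The result is EF1 for additive reported valuations."""
    remaining = set(items)
    bundles = {a: [] for a in agents_values}
    order = list(agents_values)
    turn = 0
    while remaining:
        agent = order[turn % len(order)]
        pick = max(remaining, key=lambda i: agents_values[agent][i])
        bundles[agent].append(pick)
        remaining.remove(pick)
        turn += 1
    return bundles

# Illustrative valuations: each agent's value for each item.
values = {
    "alice": {"x": 3, "y": 2, "z": 1},
    "bob":   {"x": 1, "y": 3, "z": 2},
}
print(round_robin(values, ["x", "y", "z"]))
# alice picks x, bob picks y, alice picks z
```

The talk's point is that algorithms like this one retain fairness guarantees for the true preferences even when the reported valuations fed into them are strategic misreports.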

Ulrike Schmidt-Kraepelin (Eindhoven University of Technology)

Monotone Randomized Apportionment [slides]

Apportionment is the act of distributing the seats of a legislature among political parties (or states) in proportion to their vote shares (or populations). A famous impossibility by Balinski and Young (2001) shows that no apportionment method can be proportional up to one seat (quota) while also responding monotonically to changes in the votes (population monotonicity). Grimmett (2004) proposed to overcome this impossibility by randomizing the apportionment, which can achieve quota as well as perfect proportionality and monotonicity — at least in terms of the expected number of seats awarded to each party. Still, the correlations between the seats awarded to different parties may exhibit bizarre non-monotonicities. When parties or voters care about joint events, such as whether a coalition of parties reaches a majority, these non-monotonicities can cause paradoxes, including incentives for strategic voting. We propose monotonicity axioms ruling out these paradoxes, and study which of them can be satisfied jointly with Grimmett’s axioms. Essentially, we require that, if a set of parties all receive more votes, the probability of those parties jointly receiving more seats should increase. Our work draws on a rich literature on unequal probability sampling in statistics (studied as dependent randomized rounding in computer science). Our main result shows that a sampling scheme due to Sampford (1967) satisfies Grimmett’s axioms and a notion of higher-order correlation monotonicity.  
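A minimal sketch of Grimmett's randomized apportionment (systematic rounding of cumulative quotas with a single uniform offset), assuming parties are processed in a fixed order; the vote counts are illustrative:

```python
import math
import random

def grimmett_apportionment(votes, seats, rng=random):
    """Grimmett's (2004) randomized apportionment: draw one uniform offset
    and round the cumulative quotas. Each party receives either the floor
    or the ceiling of its quota, and its expected seat count equals the
    quota exactly."""
    total = sum(votes.values())
    quotas = {p: seats * v / total for p, v in votes.items()}
    u = rng.random()
    result, cum_prev = {}, 0.0
    for p, q in quotas.items():
        cum = cum_prev + q
        # Difference of rounded cumulative sums is floor(q) or ceil(q).
        result[p] = math.floor(cum - u) - math.floor(cum_prev - u)
        cum_prev = cum
    return result

# Illustrative votes; quotas are A: 3.5, B: 2.1, C: 1.4 for 7 seats.
allocation = grimmett_apportionment({"A": 5, "B": 3, "C": 2}, 7,
                                    rng=random.Random(0))
print(allocation)  # seat counts sum to 7; each is its quota rounded up or down
```

The scheme satisfies quota and ex-ante proportionality, but as the abstract notes, the *joint* distribution of seats across parties is what such a simple scheme leaves uncontrolled, which is the gap the Sampford-based result addresses.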


This talk is based on joint work with José Correa, Paul Gölz, Jamie Tucker-Foltz, and Victor Verdugo.

Marc Vorsatz (National University of Distance Education)

The Structure of Strategy-Proof Rules [slides]

We establish that all strategy-proof social choice rules in strict preference domains necessarily follow a two-step procedure. In the first step, agents are asked to reveal some specific information about their preferences. Afterwards, a subrule that is dictatorial or strategy-proof of range 2 is applied, and the selected subrule may differ depending on the answers given in the first step. As a consequence, the strategy-proof rules that have been identified in the literature for some domains can be reinterpreted in terms of our procedure and, more importantly, the procedure serves as a guide for determining the structure of strategy-proof rules in domains that have not yet been explored.

This talk is based on joint work with Jorge Alcalde-Unzu.

Posters