Israel Algorithmic Game Theory Seminar

The Israel Algorithmic Game Theory Seminar was a joint AGT seminar of all the major universities in Israel, which took place during the Covid-19 pandemic. The seminar was held virtually from October 2020 to May 2022.

To receive future announcements please join the mailing list here.

Past Talks

Recordings of the talks are available on the seminar's YouTube channel.

2021/2022 academic year

The seminar was organized by Moshe Babaioff (Microsoft Research), Michal Feldman (Tel Aviv University), and Sigal Oren (Ben-Gurion University).

Divyarthi Mohan (TAU) and Shiri Ron (WIS) assisted with seminar scheduling and communication.

Date: May 31, 2022

Speaker: Éva Tardos (Cornell University)

Title: Stability and Learning in Strategic Queueing Systems [Recording]


Over the last two decades we have developed a good understanding of how to quantify the impact of strategic user behavior on outcomes in many games (including traffic routing and online auctions), and have shown that the resulting bounds extend to repeated games, assuming players use a form of no-regret learning to adapt to the environment. Unfortunately, these results do not apply when outcomes in one round affect the game in the future, as is the case in many applications. In this talk, we study this phenomenon in the context of a game modeling queuing systems: routers compete for servers, and packets that do not get served must be resent, so the number of packets at each round depends on the routers' success in previous rounds. In joint work with Jason Gaitonde, we analyze the resulting highly dependent random process. We find that if the capacity of the servers is high enough that a centralized and knowledgeable scheduler could get all packets served even with double the packet arrival rate, then despite the selfish behavior of the queues, the expected number of packets in the queues remains bounded throughout time, assuming older packets have priority. Further, if queues are more patient in evaluating their outcomes, maximizing their long-run success rate, stability can be ensured with only a factor of 1.58 extra capacity, strictly better than what is possible assuming the no-regret property.
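The two capacity thresholds in the abstract can be stated compactly; the symbols below are ours (λ for the aggregate packet arrival rate, μ for the capacity a centralized scheduler can sustain), and the constant 1.58 is e/(e−1):

```latex
% No-regret queues: a factor-2 capacity slack suffices for stability.
\text{no-regret play:}\qquad \mu \ge 2\lambda
  \;\Longrightarrow\; \text{queue sizes stay bounded in expectation}
% Patient (long-run rate maximizing) queues: the constant improves.
\text{patient play:}\qquad \mu \ge \tfrac{e}{e-1}\,\lambda
  \;\Longrightarrow\; \text{queue sizes stay bounded},
  \qquad \tfrac{e}{e-1}\approx 1.58
```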

Date: May 24, 2022

Speaker: Constantinos Daskalakis (Massachusetts Institute of Technology)

Title: How Hard is Equilibrium Learning in Multi-Agent RL? [Recording]


Multi-agent reinforcement learning lies at the heart of recent developments and future challenges in Artificial Intelligence, from playing Go and Starcraft to improving autonomous driving and evaluating the outcomes of economic policies. In contrast to single-agent RL, where a multitude of algorithms are available for computing near-optimal policies, in multi-agent RL Nash equilibria are intractable and the complexity of (coarse) correlated equilibrium learning is not well understood.

We show that computing approximate stationary Markov coarse correlated equilibria (CCE) in general-sum stochastic games is computationally intractable, even when there are two players, the game is turn-based, the discount factor is an absolute constant, and the approximation is an absolute constant. Our intractability results stand in sharp contrast to normal-form games where exact CCEs are efficiently computable and learnable via no-regret learning. A fortiori, our results imply that there are no efficient algorithms for learning stationary Markov CCE policies in multi-agent RL. In turn, these results stand in sharp contrast to single-agent RL where near-optimal stationary Markov policies can be efficiently learned.

Complementing our intractability results for stationary Markov CCEs, we provide a decentralized algorithm (assuming shared randomness among players) for learning a nonstationary Markov CCE policy with polynomial time and sample complexity in all problem parameters.

Joint work with Noah Golowich and Kaiqing Zhang.

Date: May 17, 2022

Speaker: Piotr Dworczak (Northwestern University)

Title: Redistributive Allocation Mechanisms


Many scarce public resources are allocated at below-market-clearing prices, and sometimes for free. Such “non-market” mechanisms necessarily sacrifice some surplus, yet they can potentially improve equity. In this paper, we develop a model of mechanism design with redistributive concerns. Agents are characterized by a privately observed social welfare weight and willingness to pay for quality, as well as a publicly observed label. A market designer controls allocation and pricing of a set of objects of heterogeneous quality, and maximizes the expectation of a welfare function defined by the social welfare weights. We derive structural insights about the form of the optimal mechanism, leading to a framework for determining how and when to use non-market mechanisms. The key determinant is the strength of the statistical correlation of the unobserved social welfare weights with the label and the willingness to pay that the designer can, respectively, directly observe or elicit through the mechanism.

This is joint work with Mohammad Akbarpour and Scott Kominers.

Link to the paper:

Date: May 10, 2022

Speaker: Ron Kupfer (Harvard University)

Title: Simplicity in Auctions Revisited: The Implementation Complexity [Recording]


The talk is based on joint work with Moshe Babaioff and Shahar Dobzinski.

In this work we revisit the notion of simplicity in mechanisms. We consider a monopolist who holds m heterogeneous items, facing a single buyer with a valuation v. We observe that previous attempts to define complexity measures often fail to classify mechanisms that are intuitively considered simple as such. We propose a novel simplicity measure: the implementation complexity. Roughly speaking, the implementation complexity of a menu is the number of demand and value queries needed to find the profit-maximizing bundle for any valuation v.

We show that for any distribution D over weighted matroid rank valuations, even distributions with arbitrary correlation among their values, there is always a bundle-size pricing menu with low implementation complexity that achieves almost the same revenue as the optimal bundle-size pricing menu. As part of this proof we provide a randomized algorithm that for any weighted matroid rank valuation v and integer k, finds the most valuable set of size k with only a poly-logarithmic number of demand and value queries. We show that this result is essentially tight in several aspects. For example, if the valuation v is submodular, then finding the most valuable set of size k requires exponentially many queries (this solves an open question of Badanidiyuru et al. [EC'12]). We also show that any deterministic algorithm that finds the most valuable set of size k requires Ω(m) demand and value queries, even for additive valuations.
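To make the query model concrete, here is a small illustrative sketch (not the paper's randomized algorithm): for an *additive* valuation, the most valuable size-k bundle is just the k highest-value items, which singleton value queries suffice to find.

```python
# Illustrative sketch, assuming an additive valuation: each entry of
# `item_values` plays the role of the answer to a singleton value query.

def best_bundle_additive(item_values, k):
    """Return the indices of the most valuable set of k items."""
    ranked = sorted(range(len(item_values)), key=lambda i: -item_values[i])
    return set(ranked[:k])

# Example: values for m = 5 items.
values = [3.0, 7.0, 1.0, 5.0, 2.0]
print(best_bundle_additive(values, 2))  # {1, 3}: the items valued 7 and 5
```

For richer classes such as weighted matroid rank or submodular valuations this naive approach no longer applies, which is exactly where the talk's query-complexity results come in.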

Date: May 3, 2022

Speaker: Arpita Biswas (Harvard University)

Title: Two-Sided Fairness of Many-to-one Allocations [Recording]


Fair allocation of indivisible items has been receiving significant attention because of its applicability in various real-world settings. Several fairness notions, such as envy-freeness up to one good (EF1), the maximin fair share guarantee (MMS), and Leximin fairness, have been popularly used for defining fairness among a set of agents. We extend this literature to scenarios where there are two groups of agents, with each group having cardinal preferences over the agents of the other group. The goal is to find a many-to-one allocation that ensures fairness guarantees for both groups. Using the popular fairness definitions from the fair allocation literature, we study many-to-one fair allocations. We develop efficient algorithms for specific instances of the problem and investigate computational issues to obtain fairness guarantees for all agents across both groups.

Results covered in the talk:

  1. S. Narang, A. Biswas, and Y. Narahari. On Achieving Leximin Fairness and Stability in Many-to-One Matchings. To appear in Proceedings of the International Conference on Autonomous Agents and Multiagent Systems as an extended abstract (AAMAS 2022).

  2. A. Biswas, G.K. Patro, N. Ganguly, K. Gummadi, and A. Chakraborty. Towards Fair Recommendation in Two-Sided Platforms. ACM Transactions on the Web, Volume 16, Issue 2 May 2022, Article No.: 8, pp 1-34 (TWEB 2022).

Date: April 26, 2022

Speaker: Rebecca Reiffenhauser (Sapienza University of Rome)

Title: Allocating Indivisible Goods to Strategic Agents: Pure Nash Equilibria and Fairness [Recording]


When aiming to allocate goods to agents, both fairness and incentive compatibility are central goals. Unfortunately, each of these is often hard to achieve on its own, and aiming for both in the same (money-free) procedure quickly leads to strong impossibility results. This work suggests a way around them, by abandoning the initial goal in favor of the next-best thing: we show that the very natural round-robin algorithm for additive valuations actually produces EF1-fair 'truthful equilibria', even though agents might have misreported their values.
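A minimal sketch of the round-robin procedure discussed above: agents take turns in a fixed order, each picking their favorite remaining item according to their *reported* additive values.

```python
# Round-robin allocation sketch for additive (reported) valuations.

def round_robin(reported_values, num_items):
    """reported_values[a][i] = agent a's reported value for item i."""
    n = len(reported_values)
    remaining = set(range(num_items))
    bundles = [set() for _ in range(n)]
    turn = 0
    while remaining:
        a = turn % n
        # Agent a takes her highest-reported-value remaining item.
        pick = max(remaining, key=lambda i: reported_values[a][i])
        bundles[a].add(pick)
        remaining.remove(pick)
        turn += 1
    return bundles

# Two agents, four items.
vals = [[4, 3, 2, 1],
        [1, 4, 3, 2]]
print(round_robin(vals, 4))  # [{0, 2}, {1, 3}]
```

It is a classical fact that this procedure is EF1 with respect to the reported values; the talk's contribution concerns what happens at equilibrium when those reports are strategic.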

The talk discusses this main result as well as some additional results, and gives an outlook on future work.

Based on the following paper: [WINE 2021, best paper award]

Date: April 12, 2022

Speaker: Shuchi Chawla (University of Texas at Austin)

Title: Buy-Many Mechanisms for Many Buyers [Recording]


A recent line of research has established a novel desideratum for designing approximately revenue-optimal multi-item mechanisms, namely the buy-many constraint. Under this constraint, the prices for different allocations made by the mechanism must be subadditive, implying that the price of a bundle cannot exceed the sum of the prices of the individual items it contains. This natural constraint has enabled several positive results in multi-item mechanism design, bypassing well-established impossibility results.

Our work addresses a main open question from this literature involving the design of buy-many mechanisms for multiple buyers. Our main result is that a simple sequential item pricing mechanism with buyer-specific prices can achieve an $O(\log m)$ approximation to the revenue of any buy-many mechanism when all buyers have unit-demand preferences over $m$ items. This is the best possible as it directly matches the previous results for the single-buyer setting where no simple mechanism can obtain a better approximation. Our result applies in full generality: even though there are many alternative ways buy-many mechanisms can be defined for multi-buyer settings, our result captures all of them at the same time. We achieve this by directly competing with a more permissive upper-bound on the buy-many revenue, obtained via an ex-ante relaxation.

This is joint work with Rojin Rezvan (UT-Austin), Yifeng Teng (Google Research), and Christos Tzamos (UW-Madison).

Date: April 5, 2022

Speaker: Hu Fu (Shanghai University of Finance and Economics)

Title: Mechanism Design for an Intermediary Platform [Recording]


Feldman et al. [2010] proposed a model for the modern online advertisement market with mediating agencies, where a Demand-Side Platform (DSP) acts as an intermediary that collects direct bids from a number of advertisers and then participates on their behalf as a single bidder in a central Ad Exchange (ADX). Such an intermediary profits from the difference between the payments in the two auctions: the one it hosts and the one in which it participates. We study the mechanism design problem for each individual intermediary DSP and for the ADX.

Bypassing complexities raised by Feldman et al., we characterize mechanisms and bidding strategies of a profit maximizing intermediary. We show that, regardless of the mechanism run by the ADX, a profit-maximizing DSP determines the contingent winner among its advertisers using the allocation rule of Myerson's classical revenue optimal auction, and bids in the ADX's auction as if having a value equal to the (ironed) virtual value of the contingently winning advertiser. The characterization immediately gives answers to mechanism design questions for the ADX, as well as markets with multiple layers of intermediaries.
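For a regular value distribution F_i with density f_i, the virtual value in the characterization above takes Myerson's standard form (ironing, written here with a bar, handles non-regular distributions); the notation is ours:

```latex
\varphi_i(v_i) \;=\; v_i - \frac{1 - F_i(v_i)}{f_i(v_i)},
\qquad
i^* = \arg\max_i \bar{\varphi}_i(v_i),
\qquad
\text{DSP's bid in the ADX} \;=\; \bar{\varphi}_{i^*}(v_{i^*})
```

That is, the profit-maximizing DSP picks its contingent winner by highest (ironed) virtual value and forwards that virtual value as its own bid.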

As consequences of this characterization, we show that, in contrast to classical auctions, the revenue of the ADX platform may decrease with stochastically higher values of an advertiser or with an extra advertiser. Also, sequential posted pricing may not approximate the optimal profit for an intermediary within any constant factor. To restore a connection between welfare- and profit-maximizing mechanisms à la Bulow and Klemperer, we show that, if the ADX runs the second-price auction, a DSP running a welfare-maximizing mechanism can approximate the optimal profit of a DSP with market share α ∈ [0,1] if the former has market share Ω(√α). We also show that advertisers are better off staying on a smaller intermediary platform than joining a bigger one.

This is joint work with Nick Gravin, Pinyan Lu, and Zhihao Gavin Tang.

Date: March 29, 2022

Speaker: Ruta Mehta (University of Illinois at Urbana-Champaign)

Title: Allocating Goods, Bads, and Mixed: Fairness and Efficiency through Competitiveness [Recording]


Fair division is the problem of allocating a set of items among agents in a fair and efficient manner. This age-old problem, mentioned even in the Bible, arises naturally in a wide range of real-life settings, for example, school seat assignments, partnership dissolution, sharing of satellites, and dividing costs for climate resilience. Division based on competitive equilibrium (CE) has emerged as one of the best mechanisms for this problem. The existence and computability of CE have been extensively studied when all items are disposable goods, while the problem is less explored when some of them are nondisposable chores (bads). In this talk, I will discuss recent algorithmic advances on the computation of CE when each item may be a good, a chore, or both (mixed).

I will first consider the case of additive valuations, where, when all items are goods, the CE set is well known to be captured by convex programming formulations and thereby forms a convex set. In sharp contrast, with chores, the CE set may be nonconvex and disconnected. I will discuss how to handle this non-convexity through a novel exterior-point method that finds an approximate CE in polynomial time (FPTAS). This method seems general enough to work with any mathematical formulation that optimizes a coordinate-wise monotone function over linear constraints. Finally, I will discuss extensions to general utility functions, and recent developments on the exchange setting (barter system).

Based on joint works with Shant Boodaghians, Bhaskar Ray Chaudhury, Jugal Garg, and Peter McGlaughlin.

Date: March 22, 2022

Speaker: Jason Hartline (Northwestern University)

Title: Optimization of Scoring Rules [Recording]


We introduce an objective for optimizing proper scoring rules. The objective is to maximize the increase in payoff of a forecaster who exerts a binary level of effort to refine a posterior belief from a prior belief. In this framework we characterize optimal scoring rules in simple settings, give efficient algorithms for computing optimal scoring rules in complex settings, and identify simple scoring rules that are approximately optimal. In comparison, standard scoring rules in theory and practice -- for example the quadratic rule, scoring rules for the expectation, and scoring rules for multiple tasks that are averages of single-task scoring rules -- can be very far from optimal.
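As background for the abstract above, here is the quadratic (Brier) scoring rule it mentions, with a quick numerical check of properness; this is standard material, not the paper's optimized rule.

```python
# Quadratic (Brier) scoring rule: S(p, i) = 2*p_i - sum_j p_j^2.
# It is strictly proper: reporting the true belief maximizes expected score.

def quadratic_score(report, outcome):
    return 2 * report[outcome] - sum(q * q for q in report)

def expected_score(belief, report):
    """Expected score under `belief` when reporting `report`."""
    return sum(b * quadratic_score(report, i) for i, b in enumerate(belief))

belief = [0.7, 0.3]
truthful = expected_score(belief, belief)          # = sum of squares = 0.58
shaded = expected_score(belief, [0.5, 0.5])        # = 0.50
print(truthful > shaded)  # True: truthful reporting is optimal
```

The talk's objective asks a different question: among all proper rules like this one, which best rewards a forecaster for the effort of refining a prior into a posterior.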

Joint work with Yingkai Li, Liren Shan, Yifan Wu.

Link to the paper:

Date: March 15, 2022

Speaker: Ben Brooks (University of Chicago)

Title: On the Structure of Informationally Robust Optimal Auctions [Recording]


We study the design of profit-maximizing mechanisms in environments with interdependent values. A single unit of a good is for sale. There is a known joint distribution of the bidders' values for the good. Two programs are considered:

(i) Max (over mechanisms) min (over information structures and equilibria) profit;

(ii) Min (over information structures) max (over mechanisms and equilibria) profit.

We show that it is without loss to restrict attention to solutions of (i) and (ii) in which actions and signals belong to the same linearly ordered space, equilibrium actions are equal to signals, and the only binding equilibrium constraints are those associated with local deviations. Under such restrictions, the non-linear programs (i) and (ii) become linear programs that are dual to one another in an approximate sense. In particular, the restricted programs have the same optimal value, which we term the profit guarantee. These results simplify the task of computing and characterizing informationally robust optimal auctions and worst-case information structures with general value distributions. The framework can be generalized to include additional feasibility constraints, multiple goods, and ambiguous value distributions.

Joint work with Songzi Du.

Link to the paper:

Date: March 8, 2022

Speaker: Noam Nisan (Hebrew University)

Title: Auctions Between Regret-Minimizing Agents [Recording]


We analyze a scenario in which software agents implemented as regret-minimizing algorithms engage in a repeated auction on behalf of their users. We study first price and second price auctions, as well as their generalized versions (e.g., as those used for ad auctions). Using both theoretical analysis and simulations, we show that, surprisingly, in second price auctions the players have incentives to mis-report their true valuations to their own learning agents, while in the first price auction it is a dominant strategy for all players to truthfully report their valuations to their agents.

Joint work with Yoav Kolumbus.

See paper on arXiv:, as well as a related companion paper:

Date: January 11, 2022

Speaker: Hervé Moulin (University of Glasgow)

Title: Fair Division With Money and Prices [Recording]


The common practice of bypassing the indivisibility of objects with cash compensations has received little attention in the fair division literature, with the single exception of the assignment problem. We explore this approach in the standard model with quasi-linear utilities but with few or no restrictions on the utilities over subsets of objects, just like for combinatorial auctions.

We first uncover a severe trade-off, absent in classic fair division without cash transfers, between ex ante fairness, measured by the utility an agent can guarantee in the worst case over adversarial agents, and ex post fairness in the form of Envy Freeness: the latter implies that some agents with equal rights to the pile of goods are not guaranteed any positive utility. The simple Stand Alone Upper Bound is an alternative interpretation of ex post fairness.

Our main result is the construction of a division rule in which two agents (for instance) bid for the role of Seller or Buyer of the goods, and the Seller's price is constrained by her winning bid. In this mechanism:

  • individual messages are cognitively tractable and privacy preserving

  • individual guarantees are normatively appealing, much larger than with the familiar Multi Auction, or any mechanism compatible with Envy Freeness

  • safe play by the agents meets the Stand Alone UB if utilities are balanced or anti-balanced, and/or if objects are identical

  • full efficiency is ruled out by the low dimension of messages, even for additive utilities, but numerical examples suggest that safe play achieves a good approximation

Joint work with Anna Bogomolnaia.

Date: January 4, 2022

Speaker: Shahar Dobzinski (Weizmann Institute of Science)

Title: On the Hardness of Dominant Strategy Mechanism Design [Recording]


We study the communication complexity of dominant strategy implementations of combinatorial auctions. We start with two domains that are generally considered “easy”: multi-unit auctions with decreasing marginal values and combinatorial auctions with gross substitutes valuations. For both domains we have fast algorithms that find the welfare-maximizing allocation with communication complexity that is poly-logarithmic in the input size. This immediately implies that welfare maximization can be achieved in an ex-post equilibrium with no significant communication cost, by using VCG payments. In contrast, we show that in both domains the communication complexity of any dominant strategy implementation that achieves the optimal welfare is polynomial in the input size.

We then study the approximation ratios achievable by dominant strategy mechanisms. For combinatorial auctions with general valuations, we show that no deterministic dominant-strategy mechanism achieves an approximation ratio of m^{1−ε}, where m is the number of items. In contrast, there is a randomized dominant-strategy mechanism that achieves an O(√m) approximation. This proves the first gap between computationally efficient deterministic dominant-strategy mechanisms and randomized ones.

Joint work with Shiri Ron and Jan Vondrak.

Date: December 28, 2021

Speaker: Yotam Gafni (Technion - Israel Institute of Technology)

Title: Long-Term Data Sharing Under Exclusivity Attacks [Recording]


The quality of learning generally improves with the scale and diversity of data. Companies and institutions can therefore benefit from building models over shared data. Many cloud and blockchain platforms, as well as government initiatives, are interested in providing this type of service.

These cooperative efforts face a challenge, which we call "exclusivity attacks". A firm can share distorted data so that it still learns the best model fit while also being able to mislead others. We study protocols for long-term interactions and their vulnerability to these attacks, in particular for regression and clustering tasks. We conclude that the choice of protocol, as well as the number of Sybil identities an attacker may control, is material to vulnerability.

Joint work with Moshe Tennenholtz (Technion).

Date: December 21, 2021

Speaker: Annie Liang (Northwestern University)

Title: Algorithmic Design: Fairness Versus Accuracy [Recording]


Algorithms are increasingly used to guide decisions in high-stakes contexts, such as who should receive bail, be approved for a loan, or be hired for a job. Motivated by a growing body of empirical evidence, regulators are concerned about the possibility that the error rates of these algorithms differ sharply across subgroups of the population. What are the tradeoffs between accuracy and fairness faced by algorithms, and how do they depend on the underlying statistical properties of the algorithm's inputs? To answer these questions, we propose a model in which a designer chooses an algorithm that maps observed inputs into decisions. We characterize the accuracy-fairness Pareto frontier (corresponding to different preferences for how to trade off accuracy and fairness), and identify simple statistical properties of the algorithm's inputs that determine the shape of this frontier. These general results allow us to derive characterizations in special cases where the inputs satisfy additional structure, for example when one of the inputs is the individual's group identity. We conclude by applying our results to address a recent policy question regarding whether and when to ban use of certain kinds of inputs by predictive algorithms.

Joint work with Jay Lu and Xiaosheng Mu.

Date: December 14, 2021

Speaker: Brendan Lucier (Microsoft Research)

Title: Automated Bidding: Efficiency and Regret in a Repeated Bidding Game [Recording]


Automated bidding agents are now standard in online advertising auctions. An advertiser specifies aggregate objectives and constraints, and a dynamic bidding algorithm tunes the bids of an ad campaign to meet those objectives. How should these auto-bidders be designed to guarantee good market outcomes when they bid against each other, while simultaneously optimizing on the individual advertiser's behalf? We show that an adaptive bid-pacing strategy obtains vanishing regret for an individual advertiser, while also guaranteeing approximate efficiency for the overall marketplace when used simultaneously. Our results apply to different types of underlying auctions (including first and second-price auctions) and do not rely on convergence of the auto-bidder strategies to equilibrium.

Joint work with Jason Gaitonde, Yingkai Li, Bar Light, and Alex Slivkins

Date: December 7, 2021

Speaker: Rann Smorodinsky (Technion - Israel Institute of Technology)

Title: Reaping the Informational Surplus in Bayesian Persuasion [Recording]


The Bayesian persuasion model studies communication between an informed sender and a receiver with a payoff-relevant action, emphasizing the ability of a sender to extract maximal surplus from his informational advantage. In this paper, we study a setting with multiple senders, in which the receiver is restricted to choosing, at the interim stage, one sender with whom to interact. Our main result is that whenever senders are uncertain about each other's preferences, and, in particular, cannot dismiss with certainty the possibility that others are aligned with the receiver, the receiver receives all the informational surplus in all equilibria.

Joint work with Niklas Hahn (Frankfurt), Martin Hoefer (Frankfurt), and Ronen Gradwohl (Ariel).

Date: November 23, 2021

Speaker: Divyarthi Mohan (Tel Aviv University)

Title: Simplicity and Optimality in Multi-Item Auctions [Recording]


Designing mechanisms to maximize revenue is a fundamental problem in mathematical economics, with applications such as online ad auctions and spectrum auctions. Unfortunately, optimal auctions for selling multiple items can be unreasonably complex and computationally intractable. In this talk, we consider a revenue-maximizing seller with n items facing a single unit-demand buyer. Our work shows that simple mechanisms can achieve almost optimal revenue. We approach the tradeoffs of simplicity formally through the lens of computation and menu size. Our main result provides a mechanism that achieves a (1 − ε)-approximation to the optimal revenue in time quasi-polynomial in n and has quasi-polynomial (symmetric) menu complexity.

I will also briefly discuss our work on welfare maximization in rich advertising auctions. Online ad auctions have evolved from allocating a few lines of text to richer formats that include images, sitelinks, etc. In our model, along with their bid per click, each advertiser reports a set of ad formats and only one of them can be shown. We provide a simple greedy allocation that is monotone (in both value and the set of formats) and guarantees a 1/3-approximation to the optimal welfare.

Date: November 16, 2021

Speaker: Yiling Chen (Harvard University)

Title: Faithful Lottery Mappings: from Games to Competitions


In many applications, we are blessed with mechanisms with monetary payments (scores) that induce a desirable game-theoretic equilibrium. For example, in information elicitation, carefully designed scoring functions lead to an equilibrium where everyone reports their private information truthfully. However, when such a mechanism is used for running a competition, which gives out one or more indivisible prizes to participants who receive top scores in the mechanism, the desirable equilibrium of the mechanism is generally no longer an equilibrium of the competition. In this talk, I’ll discuss how we design faithful lottery mappings that convert a large family of mechanisms with monetary payments to competitions with indivisible prizes while preserving the equilibrium structure of the original mechanisms in the competitions.

This talk is based on ongoing joint work with Fang-Yi Yu and Shuran Zheng.

Date: November 9, 2021

Speaker: Mark Braverman (Princeton University)

Title: Optimization-friendly generic mechanisms without money [Recording]


Our goal is to develop a generic framework for converting modern gradient-descent based optimization algorithms into mechanisms where inputs come from self-interested agents.

We focus on aggregating preferences from n players in a context without money. Special cases of this setting include voting, allocation of items by lottery, and matching. Our key technical contribution is a new meta-algorithm we call APEX (Adaptive Pricing Equalizing Externalities). The framework is sufficiently general to be combined with any optimization algorithm that is based on local search.

In the talk I'll outline the algorithm, and the open problems and research directions that it raises, with a particular focus on mechanism design + ML.

If time permits, I will discuss a special case of applying the framework to the problem of one-sided allocation with lotteries. In this case, we obtain a strengthening of the 1979 result by Hylland and Zeckhauser on allocation via a competitive equilibrium from equal incomes (CEEI). The [HZ79] result posits that there is a (fractional) allocation and a set of item prices such that the allocation is a competitive equilibrium given the prices. We further show that there is always a reweighting of the players' utility values such that running the standard unit-demand VCG with the reweighted utilities leads to HZ equilibrium prices. Interestingly, not all HZ competitive equilibria arise from VCG prices.

Date: November 2, 2021

Speaker: Jacob Leshno (University of Chicago Booth Business School)

Title: Price Discovery in Waiting Lists: A Connection to Stochastic Gradient Descent [Recording]


Waiting lists allocate items by offering agents a choice among items with associated waiting times. These waiting times serve as prices that are determined endogenously and adjust according to the stochastic arrivals and departures of agents. We study the allocative efficiency under these dynamically adjusting prices by drawing a connection between this price adjustment process and the stochastic gradient descent optimization algorithm. We show that the loss due to price fluctuations is bounded by the granularity of price adjustments. Additional conditions allow us to identify markets where the loss is close to the bound or exponentially small. Our results show that a simple price adjustment heuristic can perform well, but may be slow to adjust to changes in demand because of a trade-off between the speed of adaptation and loss from price fluctuations.
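The price-adjustment dynamic described above can be caricatured in a toy simulation (hypothetical parameters and demand model, not the paper's): a single item's waiting time moves up or down by a fixed granularity depending on whether an arriving agent joins the list, a stochastic-gradient-like step toward the market-clearing wait.

```python
import random

# Toy model: the wait w rises by eta when an arriving agent joins the
# list and falls by eta otherwise. With demand_prob(w) = 1 - w, the
# process drifts toward the point where demand_prob(w) = 1/2, i.e.
# w = 0.5, fluctuating around it at a scale set by the granularity eta.

def simulate(eta, rounds, demand_prob, seed=0):
    rng = random.Random(seed)
    w = 0.0
    for _ in range(rounds):
        joins = rng.random() < demand_prob(w)   # agent's value exceeds wait?
        w += eta if joins else -eta
        w = max(w, 0.0)                          # waits cannot go negative
    return w

final_w = simulate(eta=0.01, rounds=20000,
                   demand_prob=lambda w: max(0.0, 1 - w))
print(final_w)  # hovers near the clearing wait of roughly 0.5
```

The trade-off the talk analyzes is visible even here: a smaller eta shrinks the fluctuations around the clearing wait but slows adaptation if demand shifts.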

Joint work with Itai Ashlagi, Pengyu Xian, and Amin Saberi.

Date: October 26, 2021

Speaker: Avi Cohen (Tel Aviv University)

Title: (Almost Full) EFX Exists for Four Agents (and Beyond) [Recording]


The existence of EFX allocations is a major open problem in fair division, even for additive valuations. The current state of the art is that no setting where EFX allocations are impossible is known, and yet, existence results are known only for very restricted settings, such as: (i) agents with identical valuations, (ii) 2 agents, and (iii) 3 agents with additive valuations. It is also known that EFX exists if one can leave n-1 items unallocated, where n is the number of agents.

We develop new techniques that allow us to push the boundaries of the enigmatic EFX problem beyond these known results, and (arguably) to simplify proofs of earlier results. Our main result is that every setting with 4 additive agents admits an EFX allocation that leaves at most a single item unallocated. Beyond our main result, we introduce a new class of valuations, termed nice cancelable, which includes additive, unit-demand, budget-additive and multiplicative valuations, among others. Using our new techniques, we show that both our results and previous results for additive valuations extend to nice cancelable valuations.

Date: October 19, 2021

Speaker: Manish Raghavan (Harvard University)

Title: Understanding Societal Impacts through Machine Learning and Mechanism Design: Automated Hiring as a Case Study [Recording]


The increased use of algorithmic techniques to make socially consequential decisions has prompted concerted efforts to understand the societal impacts of algorithmic decision-making. Much of this work draws primarily upon techniques and insights from machine learning; however, as algorithmic systems grow more complex, we need to incorporate ideas from game theory and mechanism design in order to fully understand their impacts.

In this talk, I'll examine how machine learning and mechanism design can come together to deepen our understanding of how algorithms impact society, using automated hiring as an example. After providing a brief overview of algorithmic hiring, I'll consider the issue of "algorithmic monoculture" in this context. This concern invokes analogies to agriculture, where a monocultural system runs the risk of severe harm from unexpected shocks. Here we show that the dangers of algorithmic monoculture run much deeper, in that monocultural convergence on a single algorithm by a group of decision-making agents, even when the algorithm is more accurate for any one agent in isolation, can reduce the overall quality of the decisions being made by the full collection of agents. Unexpected shocks are therefore not needed to expose the risks of monoculture; it can hurt accuracy even under "normal" operations, and even for algorithms that are more accurate when used by only a single decision-maker. Our results rely on minimal assumptions, and involve the development of a probabilistic framework for analyzing systems that use multiple noisy estimates of a set of alternatives.

Date: October 12, 2021

Speaker: Nicole Immorlica (Microsoft Research)

Title: Communicating with Anecdotes [Recording]


We study a model of social learning and communication using hard anecdotal evidence. There are two Bayesian agents (a sender and a receiver) who wish to communicate. The receiver must take an action whose payoff depends on her personal preferences and an unknown state of the world. The sender has access to a collection of samples correlated with the state of the world, which we think of as specific anecdotes or pieces of evidence, and can send exactly one of these samples to the receiver in order to influence her choice of action. Importantly, the sender's personal preferences may differ from the receiver's, which affects the sender's strategic choice of which anecdote to send. We show that if the sender has commitment power, then they will choose an unbiased and maximally informative communication scheme, no matter the difference in preferences. Without commitment, however, even a small difference in preferences can lead to a significant bias in the choice of anecdote, which the receiver must then account for. This can significantly reduce the informativeness of the signal, leading to substantial utility loss for both sides. One implication is informational homophily: a receiver can rationally prefer to obtain information from a poorly informed sender with aligned preferences, rather than from a knowledgeable expert whose preferences may differ from her own.

Joint work with Nika Haghtalab, Brendan Lucier, Markus Mobius, and Divyarthi Mohan.

2020/2021 academic year

The seminar was organized by Moshe Babaioff (Microsoft Research) and Sigal Oren (Ben-Gurion University).

Ron Kupfer (HUJI) and Ben Berger (TAU) assisted with seminar scheduling and communication.

The TAU seminar coordinator was Michal Feldman.

Date: June 29, 2021

Speaker: Robert Kleinberg (Cornell University)

Title: Threshold Tests as Quality Signals: Optimal Strategies, Equilibria, and Price of Anarchy


I will present and analyze a simple model of ranking and selection using pass/fail tests whose level of difficulty is selected endogenously. In the model, two firms compete to have their product chosen by a principal. The principal aims to choose the better of the two products, but the quality of a product can only be estimated via a coarse-grained "threshold test", which reveals whether the product's quality exceeds a specified threshold. We study this selection problem under two types of interactions. In the first, the principal does the testing herself and can choose tests optimally from a class of allowable tests. In the second interaction model, the two firms endogenously select the levels of difficulty of their tests. This corresponds to a setting in which the firms must commit to their testing (quality control) procedures before knowing the quality of their products. This model naturally gives rise to a signaling game with two senders and one receiver. We characterize the unique Bayes-Nash equilibrium of this game, which happens to be symmetric. We calculate its price of anarchy, and we finally show that if the principal is allowed to restrict both firms' set of available tests, the price of anarchy can be lowered but not eliminated.

This is joint work with Siddhartha Banerjee and David Kempe.

Date: June 22, 2021

Speaker: Ariel Procaccia (Harvard University)

Title: Democracy and the Pursuit of Randomness


Sortition is a storied paradigm of democracy built on the idea of choosing representatives through lotteries instead of elections. In recent years this idea has found renewed popularity in the form of citizens’ assemblies, which bring together randomly selected people from all walks of life to discuss key questions and deliver policy recommendations. A principled approach to sortition, however, must resolve the tension between two competing requirements: that the demographic composition of citizens’ assemblies reflect the general population and that every person be given a fair chance (literally) to participate. I will describe our work on designing, analyzing and implementing randomized participant selection algorithms that balance these two requirements. I will also discuss practical challenges in sortition based on experience with the adoption and deployment of our open-source system, Panelot.


Ariel Procaccia is Gordon McKay Professor of Computer Science at Harvard University. He works on a broad and dynamic set of problems related to AI, algorithms, economics, and society. His distinctions include the Social Choice and Welfare Prize (2020), a Guggenheim Fellowship (2018), the IJCAI Computers and Thought Award (2015), and a Sloan Research Fellowship (2015). To make his research accessible to the public, he has founded the not-for-profit website and he regularly contributes opinion pieces.

Date: June 15, 2021

Speaker: Aaron Roth (University of Pennsylvania)

Title: Online Multivalid Learning: Game Theory for Better Prediction [Recording]


We present a general, efficient technique for providing contextual predictions that are “multivalid” in various senses, against an online sequence of adversarially chosen examples (x, y). This means that the resulting estimates correctly predict various statistics of the labels y not just marginally (as averaged over the sequence of examples) but also conditionally on x ∈ G for any G belonging to an arbitrary intersecting collection of groups. We provide three instantiations of this framework. The first is mean prediction, which corresponds to an online algorithm satisfying the notion of multicalibration from Hebert-Johnson et al. The second is variance and higher moment predictions, which corresponds to an online algorithm satisfying the notion of mean-conditioned moment multicalibration from Jung et al. Finally, we define a new notion of prediction interval multivalidity, and give an algorithm for finding prediction intervals that satisfy it. Because our algorithms handle adversarially chosen examples, they can equally well be used to predict statistics of the residuals of arbitrary point prediction methods, giving rise to very general techniques for quantifying the uncertainty of predictions of black-box algorithms, even in an online adversarial setting. When instantiated for prediction intervals, this solves a similar problem as conformal prediction, but in an adversarial environment and with multivalidity guarantees stronger than simple marginal coverage guarantees. Our techniques are fundamentally game-theoretic, extending an approach originally due to Fudenberg and Levine. Our algorithms result from equilibrium computations. This talk is based on a paper that is joint work with Varun Gupta, Christopher Jung, Georgy Noarov, and Mallesh Pai.

Bio: Aaron Roth is a professor of Computer and Information Sciences at the University of Pennsylvania, affiliated with the Warren Center for Network and Data Science, and co-director of the Networked and Social Systems Engineering (NETS) program. He is also an Amazon Scholar at Amazon AWS. He is the recipient of a Presidential Early Career Award for Scientists and Engineers (PECASE) awarded by President Obama in 2016, an Alfred P. Sloan Research Fellowship, an NSF CAREER award, and research awards from Yahoo, Amazon, and Google. His research focuses on the algorithmic foundations of data privacy, algorithmic fairness, game theory, learning theory, and machine learning. Together with Cynthia Dwork, he is the author of the book “The Algorithmic Foundations of Differential Privacy.” Together with Michael Kearns, he is the author of “The Ethical Algorithm”.

Date: June 8, 2021

Speaker: Inbal Talgam-Cohen (Technion - Israel Institute of Technology)

Title: Incomplete Information VCG Contracts for Common Agency [Recording]


We study contract design for welfare maximization in the well-known “common agency” model of Bernheim and Whinston [1986]. This model combines the challenges of coordinating multiple principals with the fundamental challenge of contract design: that principals have incomplete information of the agent’s choice of action. Motivated by the significant social inefficiency of standard contracts for such settings (which we formally quantify using a price of anarchy/stability analysis), we investigate whether and how a recent toolbox developed for the first set of challenges under a complete-information assumption – VCG contracts [Lavi and Shamash, 2019] – can be extended to incomplete information.

We define and characterize the class of “incomplete information VCG contracts (IIVCG)”, and show it is the unique class guaranteeing truthfulness of the principals and welfare maximization by the agent. Our results reveal an inherent tradeoff between two important properties required to ensure participation in the contract: individual rationality (for the principals) and limited liability (for the agent). We design a polynomial-time algorithm for determining whether a setting has an IIVCG contract with both properties. As our main result we design a polynomial-time “algorithmic IIVCG” contract: given valuation reports from the principals it returns, if possible for the setting, a payment scheme for the agent that constitutes an IIVCG contract with all desired properties. We also give a sufficient graph-theoretic condition on the population of principals that ensures the existence of such an IIVCG contract.

This is joint work with Tal Alon, Ron Lavi and Elisheva Shamash.

Date: June 1, 2021

Speaker: Moshe Babaioff (Microsoft Research)

Title: Best-of-Both-Worlds Fair-Share Allocations [Recording]


We consider the problem of fair allocation of indivisible items among n agents with additive valuations, when agents have equal entitlements to the goods, and there are no transfers. Best-of-Both-Worlds (BoBW) fairness mechanisms aim to give all agents both an ex-ante guarantee (such as getting the proportional share in expectation) and an ex-post guarantee.

Prior BoBW results have focused on ex-post guarantees that are based on the "up to one item" paradigm, such as envy-free up to one item (EF1). In this work we attempt to give every agent a high value ex-post, and specifically, a constant fraction of her maximin share (MMS). The up to one item paradigm fails to give such a guarantee, and it is not difficult to present examples in which previous BoBW mechanisms give some agent only a 1/n-fraction of her MMS.

Our main result is a deterministic polynomial-time algorithm that computes a distribution over allocations that is ex-ante proportional, and ex-post, every allocation gives every agent at least her proportional share up to one item, and more importantly, at least half of her MMS. Moreover, this last ex-post guarantee holds even with respect to a more demanding notion of a share, introduced in this paper, that we refer to as the truncated proportional share (TPS). Our guarantees are nearly best possible, in the sense that one cannot guarantee agents more than their proportional share ex-ante, and one cannot guarantee agents more than an n/(2n-1)-fraction of their TPS ex-post.

This is a joint work with Tomer Ezra and Uriel Feige.

Date: May 25, 2021

Speaker: Vincent Conitzer (Duke University)

Title: Automated Mechanism Design for Strategic Classification


AI is increasingly making decisions, not only for us, but also about us -- from whether we are invited for an interview, to whether we are proposed as a match for someone looking for a date, to whether we are released on bail. Often, we have some control over the information that is available to the algorithm; we can self-report some information, and other information we can choose to withhold. This creates a potential circularity: the classifier used, mapping submitted information to outcomes, depends on the (training) data that people provide, but the (test) data depend on the classifier, because people will reveal their information strategically to obtain a more favorable outcome. This setting is not adversarial, but it is also not fully cooperative.

Mechanism design provides a framework for making good decisions based on strategically reported information, and it is commonly applied to the design of auctions and matching mechanisms. However, the setting above is unlike these common applications, because in it, preferences tend to be similar across agents, but agents are restricted in what they can report. This creates both new challenges and new opportunities. I will discuss both our theoretical work and our initial experiments.

(joint work with Hanrui Zhang, Andrew Kephart, Yu Cheng, Anilesh Krishnaswamy, Haoming Li, and David Rein.)

Bio: Vincent Conitzer is the Kimberly J. Jenkins Distinguished University Professor of New Technologies and Professor of Computer Science, Professor of Economics, and Professor of Philosophy at Duke University. He is also Head of Technical AI Engagement at the Institute for Ethics in AI and Professor of Computer Science and Philosophy at the University of Oxford. He received Ph.D. (2006) and M.S. (2003) degrees in Computer Science from Carnegie Mellon University, and an A.B. (2001) degree in Applied Mathematics from Harvard University. Conitzer works on artificial intelligence (AI). Much of his work has focused on AI and game theory, for example designing algorithms for the optimal strategic placement of defensive resources. More recently, he has started to work on AI and ethics: how should we determine the objectives that AI systems pursue, when these objectives have complex effects on various stakeholders?

Conitzer has received the 2021 ACM/SIGAI Autonomous Agents Research Award, the Social Choice and Welfare Prize, a Presidential Early Career Award for Scientists and Engineers (PECASE), the IJCAI Computers and Thought Award, an NSF CAREER award, the inaugural Victor Lesser dissertation award, an honorable mention for the ACM dissertation award, and several awards for papers and service at the AAAI and AAMAS conferences. He has also been named a Guggenheim Fellow, a Sloan Fellow, a Kavli Fellow, a Bass Fellow, an ACM Fellow, a AAAI Fellow, and one of AI's Ten to Watch. He has served as program and/or general chair of the AAAI, AAMAS, AIES, COMSOC, and EC conferences. Conitzer and Preston McAfee were the founding Editors-in-Chief of the ACM Transactions on Economics and Computation (TEAC).

Date: May 11, 2021

Speakers: Liad Blumrosen (Hebrew University) and Eilon Solan (Tel Aviv University)

Title: Lessons from the Recent Israeli 5G Spectrum Auction


In August 2020, the Israeli Ministry of Communication (MoC) offered an unprecedented amount of spectrum for sale. Israeli telecommunication companies competed to purchase licenses in the 700 MHz, 2.4 GHz, and 3.5 GHz bands, which are the pillars of future 5G cellular networks. For the first time in Israel, the MoC used the Combinatorial Clock Auction, a two-stage format that combines a dynamic-bidding phase and a supplementary bidding round with the nearest-Vickrey core-selecting payment rule. This auction format has been designed and used over the last two decades to improve the efficiency of spectrum auctions, but its rules are notoriously complex.

In the talk we will survey the rules of the Israeli 5G auction, its outcome, and the difficulties that stemmed from the special structure and conditions of the Israeli cellular market. We will discuss the auction process from two different angles, together with some anecdotes from the media and from a legal debate that had taken place before the auction started.

Date: May 4, 2021

Speaker: Tomer Ezra (Tel-Aviv University)

Title: Settling the Communication Complexity of Combinatorial Auctions with Two Subadditive Buyers [Recording]


We study the communication complexity of welfare maximization in combinatorial auctions with m items and two players with subadditive valuations.

We show that outperforming the trivial 1/2-approximation requires exponential communication, settling an open problem of Dobzinski, Nisan and Schapira [STOC'05, MOR'10] and Feige [STOC'06, SICOMP '09].

To derive our results, we introduce a new class of subadditive functions that are “far from” fractionally subadditive (XOS) functions, and establish randomized communication lower bounds for a new “near-EQUALITY” problem, both of which may be of independent interest.

Joint work with Michal Feldman, Eric Neyman, Inbal Talgam-Cohen, S. Matthew Weinberg.

Date: April 27, 2021

Speaker: Alexander Teytelboym (University of Oxford)

Title: The Equilibrium Existence Duality: Equilibrium with Indivisibilities & Income Effects [Recording]


We show that, with indivisible goods, the existence of competitive equilibrium fundamentally depends on agents' substitution effects, not their income effects. Our Equilibrium Existence Duality allows us to transport results on the existence of competitive equilibrium from settings with transferable utility to settings with income effects. One consequence is that net substitutability, which is a strictly weaker condition than gross substitutability, is sufficient for the existence of competitive equilibrium. We also extend the “demand types” classification of valuations to settings with income effects and give necessary and sufficient conditions for a pattern of substitution effects to guarantee the existence of competitive equilibrium.

Joint work with Elizabeth Baldwin, Omer Edhan, Ravi Jagadeesan, and Paul Klemperer.

Date: April 20, 2021

Speaker: Paul Duetting (Google Research)

Title: An O(log log m) Prophet Inequality for Subadditive Combinatorial Auctions


Prophet inequalities compare the expected performance of an online algorithm for a stochastic optimization problem to the expected optimal solution in hindsight. They are a major alternative to classic worst-case competitive analysis, of particular importance in the design and analysis of simple (posted-price) incentive compatible mechanisms with provable approximation guarantees.

A central open problem in this area concerns subadditive combinatorial auctions. Here n agents with subadditive valuation functions compete for the assignment of m items. The goal is to find an allocation of the items that maximizes the total value of the assignment. The question is whether there exists a prophet inequality for this problem that significantly beats the best known approximation factor of O(log m).

We make major progress on this question by providing an O(log log m) prophet inequality. Our proof goes through a novel primal-dual approach. It is also constructive, resulting in an online policy that takes the form of static and anonymous item prices that can be computed in polynomial time given appropriate query access to the valuations. As an application of our approach, we construct a simple and incentive compatible mechanism based on posted prices that achieves an O(log log m) approximation to the optimal revenue for subadditive valuations under an item-independence assumption.

Joint work with Thomas Kesselheim (University of Bonn) and Brendan Lucier (Microsoft Research New England).

Link to the paper: arXiv, FOCS'21

Date: April 6, 2021

Speaker: Tami Tamir (The Interdisciplinary Center)

Title: Resource buying games with load-dependent costs [Recording]


In the basic setting of cost-sharing games, resources have fixed costs that are shared equally by their users. The talk will consider a more general setting: resource buying games with load-dependent costs and arbitrary cost-sharing. Under arbitrary cost-sharing, players not only declare the resources they will use; they also declare and submit a payment per resource. If the total payments on a resource cover its cost, the resource is activated; otherwise it remains unavailable to the players. We will analyze equilibrium existence, computation, and inefficiency in various classes of these games.

Based on joint work with Eirini Georgoulaki and Kostas Kollias.

Date: March 16, 2021

Speaker: Yoav Kolumbus (The Hebrew University of Jerusalem)

Title: Optimal Collaterals in Multi-Enterprise Investment Networks [Recording]


We study a market of investments on networks, where each agent (vertex) can invest in any enterprise linked to her and, at the same time, raise capital for her own firm's enterprise from other agents she is linked to. Failing to raise sufficient capital results in the firm defaulting and being unable to invest in others. Our main objective is to examine the role of collaterals in handling the strategic risk that can propagate into systemic risk throughout the network in a cascade of defaults. We take a mechanism design approach and solve for the optimal scheme of collateral contracts that capital raisers offer their investors. These contracts aim to sustain the efficient level of investment as a unique Nash equilibrium, while minimizing the total collateral.

Our main results contrast the network environment with its non-network counterpart (where the sets of investors and capital raisers are disjoint). We show that for acyclic investment networks, the network environment does not necessitate any additional collaterals, and systemic risk can be fully handled by optimal bilateral collateral contracts between capital raisers and their investors. This is, unfortunately, not the case for cyclic investment networks. We show that bilateral contracting will not suffice to resolve systemic risk, and the market will need an external entity to design a global collateral scheme for all capital raisers. Furthermore, the minimum total collateral that sustains the efficient level of investment as a unique equilibrium may be arbitrarily higher, even in simple cyclic investment networks, than in the corresponding non-network environment. Additionally, we prove computational-complexity results, both for a single enterprise and for networks.

Joint work with Moshe Babaioff and Eyal Winter.

Date: March 9, 2021

Speaker: Michal Feldman (Tel-Aviv University)

Title: Prophet and Secretary Online Algorithms for Matching in General Graphs [Recording]


A common tension in market scenarios is choosing the right time to commit to a decision. This tension is captured by the mathematical literature on optimal stopping theory, within two models that have come to be known as the secretary problem and the prophet inequality. In these models, a sequence of initially unknown values arrives one by one. Upon arrival, the online algorithm observes the value and must decide whether or not to accept it. In secretary settings, the value sequence is arbitrary, but the values arrive in a uniformly random order. In prophet settings, every value is drawn from a known probability distribution, but the arrival order is arbitrary.

In this talk I will review the basic settings of secretary and prophet, as well as previous extensions to matching in bipartite graphs with 1-sided vertex arrival. I will then present our recent work, which studies online algorithms (in both secretary and prophet settings) for matching in *general* graphs. We provide tight competitive ratios for both secretary and prophet matching scenarios under vertex arrival. Under edge arrival, we provide competitive ratios that improve upon the state of the art.

Based on joint work with Tomer Ezra, Nick Gravin and Zhihao Tang.

Date: March 2, 2021

Speaker: Tim Roughgarden (Columbia University)

Title: Transaction Fee Mechanism Design for the Ethereum Blockchain: An Economic Analysis of EIP-1559 [Recording]


EIP-1559 is a proposal to make several tightly coupled changes to the Ethereum blockchain’s transaction fee mechanism, including the introduction of variable-size blocks and a burned base fee that rises and falls with demand. This proposal appears likely to be adopted and deployed in late 2021, in which case it will be the biggest economic change made to a major blockchain to date.

In this talk we formalize the problem of designing a transaction fee mechanism, taking into account the many idiosyncrasies of the blockchain setting (ranging from off-chain collusion between miners and users to the ease of money-burning). We then situate the specific mechanism proposed in EIP-1559 in this framework and rigorously interrogate its game-theoretic properties. We conclude by suggesting competing designs that offer alternative sets of trade-offs and highlighting related research opportunities for the AGT community.

Date: January 19, 2021

Speaker: Scott Kominers (Harvard Business School and Harvard University)

Title: To Infinity and Beyond: Scaling Economic Theories via Logical Compactness [Recording]


Many economic-theoretic models incorporate finiteness assumptions that, while introduced for simplicity, play a real role in the analysis. Such assumptions introduce a conceptual problem, as results that rely on finiteness are often implicitly nonrobust; for example, they may depend upon edge effects or artificial boundary conditions. Here, we present a unified method that enables us to remove finiteness assumptions, such as those on market sizes, time horizons, and datasets. We then apply our approach to a variety of matching, exchange economy, and revealed preference settings.

The key to our approach is Logical Compactness, a core result from Propositional Logic. Building on Logical Compactness, in a matching setting, we reprove large-market existence results implied by Fleiner's analysis, and (newly) prove both the strategy-proofness of the man-optimal stable mechanism in infinite markets and an infinite-market version of Nguyen and Vohra's existence result for near-feasible stable matchings with couples. In a trading-network setting, we prove that the Hatfield et al. result on existence of Walrasian equilibria extends to infinite markets. In a dynamic matching setting, we prove that Pereyra's existence result for dynamic two-sided matching markets extends to a doubly infinite time horizon. Finally, beyond existence and characterization of solutions, in a revealed-preference setting we reprove Reny's infinite-data version of Afriat's theorem and (newly) prove an infinite-data version of McFadden and Richter's characterization of rationalizable stochastic datasets.

Joint work with Yannai A. Gonczarowski and Ran I. Shorrer.

Date: January 12, 2021

Speaker: Alon Eden (Harvard University)

Title: Combinatorial Auctions with Interdependent Values [Recording]


We study combinatorial auctions with interdependent valuations (IDV). In such settings, every agent has a private signal and a public valuation function that depends on the private signals of all the agents. Interdependent valuations capture settings where agents lack the information to determine their own valuations; examples include auctions for artwork or oil drilling rights. For single-item auctions, and assuming the single-crossing condition, optimal welfare can be achieved. For combinatorial settings, Jehiel and Moldovanu showed a discouraging result: unless the problem instance is degenerate, full efficiency cannot be achieved.

In this talk, I will present the first approximately optimal mechanisms for IDV settings with combinatorial preferences:

  • A VCG-like mechanism for general settings that achieves a constant-factor approximation under a submodularity condition on the signals (single crossing is not needed!).

  • A constant-factor price of anarchy (PoA) for simultaneous auctions with unit-demand bidders in large-market settings. This result holds under bounded heterogeneity of valuations and limited asymmetry of information dispersion among agents.

Based on joint works with Michal Feldman, Amos Fiat, Kira Goldner, Anna Karlin, Inbal Talgam-Cohen and Ori Zviran.

Date: January 5, 2021

Speaker: Yakov Babichenko (Technion - Israel Institute of Technology)

Title: Settling the complexity of Nash equilibrium in congestion games [Recording]


We consider (i) the problem of finding a (possibly mixed) Nash equilibrium in a congestion game, and (ii) the problem of finding an (exponential-precision) fixed point of the gradient descent dynamics of a smooth function f : [0, 1]^n → R. We prove that these problems are equivalent. Our result holds for various explicit descriptions of f, ranging from (almost general) arithmetic circuits to degree-5 polynomials. By a very recent result of [Fearnley, Goldberg, Hollender, Savani 2020], this implies that these problems are PPAD ∩ PLS-complete. As a corollary, we also obtain an equivalence of complexity classes.


Date: December 29, 2020

Speaker: Ilan Nehama (Bar-Ilan University)

Title: Almost Quasi-Linear Utilities in Disguise: An Extension of Roberts' Theorem [Recording]


This work deals with the implementation of social choice rules using dominant strategies for unrestricted preferences. The seminal Gibbard-Satterthwaite theorem shows that only a few unappealing social choice rules can be implemented unless we assume some restrictions on the preferences or allow monetary transfers. When monetary transfers are allowed and quasi-linear utilities with respect to money are assumed, Vickrey-Clarke-Groves (VCG) mechanisms were shown to implement any affine maximizer, and, by the work of Roberts, only affine maximizers can be implemented whenever the type sets of the agents are rich enough.

In this work, we generalize these results and define a new class of preferences, the almost quasi-linear in disguise preferences: preferences that can be modeled by quasi-linear utilities over a sub-domain defined by a threshold. We show that the characterization of VCG mechanisms as the only incentive-compatible mechanisms extends naturally to this domain. We show that both the original characterization of the VCG mechanism and the recent work of Ma et al. [EC 2018] on parallel utility domains are immediate corollaries of our generalized characterization. Our result follows from a simple reduction to the characterization of VCG mechanisms. Hence, we see our result more as a fuller, more correct version of the VCG characterization than as a new non-quasi-linear domain extension.

In particular, we show that these results extend naturally to the non-transferable utility domain. That is, the incentive compatibility of the VCG mechanisms does not rely on money being a common denominator, but rather on the designer's ability to fine the agents on a continuous (possibly agent-specific) scale.

We believe two insights would be fruitful in other domains as well: considering the utility as a representation of the preference rather than as the preference itself (as is common in the economics community), and considering utilities that represent the preference only over the relevant domain.

An extended abstract of a preliminary version of this work appeared in: The 15th Conference on Web and Internet Economics (WINE 2019).

Contact info:

Date: December 22, 2020

Speaker: Sigal Oren (Ben-Gurion University)

Title: Mechanism Design with Moral Bidders [Recording]


A rapidly growing literature on lying in behavioral economics and psychology shows that individuals often do not lie even when lying maximizes their utility. In this work, we attempt to incorporate these findings into the theory of mechanism design. We consider players that have a preference for truth-telling and will lie only if their benefit from lying is sufficiently larger than the loss they inflict on the others. To accommodate such players, we introduce α-moral mechanisms, in which the gain of a player from misreporting his true value, compared to truth-telling, is at most an α fraction of the loss that the others incur due to the misreport.
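To make the definition concrete, here is a minimal brute-force check of one plausible formalization of the α-moral condition in a discretized single-item auction (an illustrative sketch; the function names are mine, and the paper's exact accounting of the others' loss may differ):

```python
from itertools import product

def utilities(mechanism, bids, values):
    """Utility of each bidder under the given bid profile:
    value if allocated, minus payment."""
    winner, pay = mechanism(bids)
    return [(values[i] if i == winner else 0.0) - pay[i]
            for i in range(len(bids))]

def is_alpha_moral(mechanism, grid, alpha, n=2):
    """Check, on a discretized value grid, that no bidder can gain from a
    misreport more than alpha times the loss inflicted on the others."""
    for values in product(grid, repeat=n):
        truth = utilities(mechanism, list(values), values)
        for i in range(n):
            for lie in grid:
                if lie == values[i]:
                    continue
                bids = list(values)
                bids[i] = lie
                dev = utilities(mechanism, bids, values)
                gain = dev[i] - truth[i]
                loss = sum(truth[j] - dev[j] for j in range(n) if j != i)
                if gain > alpha * max(loss, 0.0) + 1e-9:
                    return False
    return True

def second_price(bids):
    """Truthful benchmark: highest bid wins, pays the second-highest."""
    winner = max(range(len(bids)), key=lambda i: bids[i])
    pay = [0.0] * len(bids)
    pay[winner] = max(b for j, b in enumerate(bids) if j != winner)
    return winner, pay

def first_price(bids):
    """Highest bid wins and pays its own bid; bid shading is profitable."""
    winner = max(range(len(bids)), key=lambda i: bids[i])
    pay = [0.0] * len(bids)
    pay[winner] = bids[winner]
    return winner, pay
```

Under this formalization a truthful mechanism such as the second-price auction is α-moral for every α ≥ 0, while a first-price auction is not.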

We develop a theory of moral mechanisms in the canonical setting of single-item auctions. We identify similarities and disparities to the standard theory of truthful mechanisms. In particular, we show that the auctioneer can take advantage of the players' preference for truth-telling when the values of the players are correlated, even when there are only two players. In contrast, we show that truthful mechanisms are optimal even among moral ones when the values of the players are independently drawn from certain identical distributions (e.g., the uniform and exponential distributions). A by-product of the latter proof is an alternative proof to Myerson’s optimal truthful mechanism characterization in the settings that we consider.

Joint work with Shahar Dobzinski

Date: December 15, 2020

Speaker: Giorgos Christodoulou (University of Liverpool)

Title: On the Nisan-Ronen conjecture [Recording]


The Nisan-Ronen conjecture states that no truthful mechanism for makespan minimization, when allocating m tasks to n unrelated machines, can have an approximation ratio less than n.
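For reference, the benchmark against which a mechanism is measured, the optimal makespan, can be computed by exhaustive search on small instances (an illustrative sketch, not code from the paper):

```python
from itertools import product

def min_makespan(proc_times):
    """Optimal makespan on unrelated machines: proc_times[i][j] is the
    time machine i needs for task j. Exhaustive search over all
    assignments of tasks to machines; this is the benchmark a truthful
    mechanism is conjectured unable to approximate within less than a
    factor n."""
    n, m = len(proc_times), len(proc_times[0])
    best = float("inf")
    for assignment in product(range(n), repeat=m):
        loads = [0.0] * n
        for task, machine in enumerate(assignment):
            loads[machine] += proc_times[machine][task]
        best = min(best, max(loads))
    return best
```

For example, with two machines where each machine is fast on one task (processing times [[1, 2], [2, 1]]), the optimal makespan is 1.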

Over more than two decades since its formulation, little progress has been made in resolving it, and the best known lower bound is still a small constant. In this talk we will discuss recent progress towards validating the conjecture.

Joint work with Elias Koutsoupias (U Oxford) and Annamaria Kovacs (Goethe U Frankfurt)

Link to the paper:

Date: December 8, 2020

Speaker: Shiri Ron (Weizmann Institute of Science)

Title: The Communication Complexity of Payment Computation [Recording]


Let (f,P) be an incentive compatible mechanism where f is the social choice function and P is the payment function. In many important settings, f uniquely determines P (up to a constant) and therefore a common approach is to focus on the design of f and neglect the role of the payment function.

Fadel and Segal [JET, 2009] question this approach through the lens of communication complexity: can it be that the communication complexity of an incentive compatible mechanism that implements f (that is, computes both the output and the payments) is much larger than the communication complexity of computing the output alone? I.e., can it be that cc_{IC}(f)>>cc(f)?

Fadel and Segal show that for every f, cc_{IC}(f)<= exp(cc(f)). They also show that fully computing the incentive compatible mechanism is strictly harder than computing only the output: there exists a social choice function f such that cc_{IC}(f)=cc(f)+1. In a follow-up work, Babaioff, Blumrosen, Naor, and Schapira [EC'08] provide a social choice function f such that cc_{IC}(f)=n*cc(f), where n is the number of players. The question of whether the exponential upper bound of Fadel and Segal is tight remained wide open. We resolve this question by explicitly providing an f such that cc_{IC}(f)= exp(cc(f)). In fact, we establish this via two very different proofs.

In contrast, we show that if the players are risk-neutral and we compromise on a randomized truthful-in-expectation implementation (rather than a deterministic ex-post one), then cc_{TIE}(f)=poly(n,cc(f)) for every function f, as long as the domain of f is single-parameter or a convex multi-parameter domain. We also provide efficient algorithms for deterministic computation of payments in several important domains.

Joint work with Shahar Dobzinski

Full paper:

Date: December 1, 2020

Speaker: Kira Goldner (Columbia University)

Title: On Multi-Dimensional Gains from Trade Maximization [Recording]


We study gains from trade maximization in a two-sided market with n heterogeneous items, where each item is owned by a different seller i, and there is a constrained-additive buyer with feasibility constraint F. We present the first worst-case approximation guarantee for gains from trade in a multi-dimensional two-sided market.

Our first result provides an O(log(1/r))-approximation to the first-best gains from trade for a broad class of downward-closed feasibility constraints (such as matroid, matching, knapsack, or the intersection of these). Here r is the minimum, over all items, of the probability that the buyer's value for the item exceeds the seller's cost. Our second result removes the dependence on r and provides an unconditional O(log n)-approximation to the second-best gains from trade. We extend both results to a general constrained-additive buyer, losing another O(log n) factor en route.

Joint work with Yang Cai, Steven Ma, and Mingfei Zhao.

Full paper:

Date: November 24, 2020

Speaker: Shengwu Li (Harvard University)

Title: Investment Incentives in Near-Optimal Mechanisms [Recording]


In a Vickrey-Clarke-Groves mechanism, if one bidder can invest to raise his value, that investment is privately profitable if and only if it is socially optimal. However, it is sometimes infeasible to compute welfare-maximizing allocations, so real-world mechanisms often use heuristic allocation algorithms. We study investment incentives in strategy-proof mechanisms that use such algorithms. If an algorithm is near-efficient, does it remain near-efficient in the corresponding mechanism with investment opportunities? The answer is no: there exist algorithms that are near-efficient, but are arbitrarily far-from-efficient when investments are possible. Nevertheless, we prove that if a near-efficient algorithm "excludes bossy negative externalities" then it remains near-efficient even with investments. A weakening of the condition is necessary and sufficient for the result.

Joint with Mohammad Akbarpour, Scott Duke Kominers, and Paul Milgrom.

Date: November 17, 2020

Speaker: Matt Weinberg (Princeton University)

Title: (a biased selection of) Recent Developments in Combinatorial Auctions [Recording]


In a combinatorial auction there are m items, and each of n players has a valuation function v_i, which maps sets of items to non-negative reals. A designer wishes to partition the items into S_1,...,S_n to maximize the welfare ∑_i v_i(S_i), perhaps assuming that all v_i lie in some class V (such as submodular, subadditive, etc.).
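As a concrete reference point for the welfare objective, here is a brute-force benchmark for tiny instances (an illustrative sketch, not code from any of the surveyed papers; the helper names are mine):

```python
from itertools import product

def max_welfare(items, valuations):
    """Exhaustively split `items` among the players and return the best
    welfare sum_i v_i(S_i). Exponential in the number of items, so this
    is only a benchmark for tiny instances."""
    n = len(valuations)
    best = 0.0
    for owner in product(range(n), repeat=len(items)):
        bundles = [frozenset(it for it, o in zip(items, owner) if o == i)
                   for i in range(n)]
        best = max(best, sum(v(s) for v, s in zip(valuations, bundles)))
    return best

def additive(prices):
    """Additive valuation: the value of a bundle is the sum of its
    items' values."""
    return lambda s: sum(prices[it] for it in s)

# Two additive players over items {a, b}: the optimum gives each player
# the item they value more, for welfare 3 + 2 = 5.
v0 = additive({"a": 3, "b": 1})
v1 = additive({"a": 2, "b": 2})
```

The computational and strategic questions in the talk concern how well this optimum can be approximated when the v_i come from richer classes and the players are strategic.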

Within Algorithmic Game Theory, this problem serves as a lens through which to examine the interplay between computation and incentives. For example: is it the case that whenever a poly-time/poly-communication algorithm for honest players can achieve an approximation guarantee of c when all valuations lie in V, a poly-time/poly-communication truthful mechanism for strategic players can achieve an approximation guarantee of c when all valuations lie in V as well?

In this talk, I’ll give a brief history, then survey three recent results on this topic which:

- provide the first separation between achievable guarantees of poly-communication algorithms and poly-communication truthful mechanisms for any V (joint works with Mark Braverman and Jieming Mao, and with Sepehr Assadi, Hrishikesh Khandeparkar, and Raghuvansh Saxena).

- revisit existing separations between poly-time algorithms and poly-time truthful mechanisms via a new solution concept “Implementation in Advised Strategies” (joint work with Linda Cai and Clayton Thomas).

- resolve the communication complexity of combinatorial auctions for two subadditive players (joint work with Tomer Ezra, Michal Feldman, Eric Neyman, and Inbal Talgam-Cohen, time-permitting).

Date: November 10, 2020

Speaker: Uri Feige (Weizmann Institute of Science)

Title: Fair allocation of indivisible goods: the anyprice share [Recording]


We consider fair allocation of indivisible goods to agents with additive valuation functions, in a setting without money. The notions of fairness that we shall consider are variations on the well known maximin share (MMS). For the case when all agents have equal entitlement to the goods, we shall review known results concerning the fraction of the MMS that one can guarantee to the agents, and also offer some improvements on these results. We then introduce a new fairness notion, which we refer to as the anyprice share (APS). Unlike the maximin share, the anyprice share extends without change to settings in which agents have unequal entitlements. To analyse the fraction of the APS that one can guarantee to all agents, we consider a natural allocation game, and present strategies that guarantee an agent at least a 3/5 fraction of his APS, regardless of the strategies employed by the other agents.
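For intuition, the maximin share of an agent with additive valuations can be computed by brute force on small instances (an illustrative sketch; function and variable names are mine):

```python
from itertools import product

def maximin_share(values, n):
    """MMS of an agent with additive item values: the best, over all
    partitions of the items into n bundles, of the worst bundle's value
    (the split the agent would make if she had to choose her bundle
    last). Brute force over bundle assignments, for small instances."""
    best = 0.0
    for owner in product(range(n), repeat=len(values)):
        bundle_vals = [0.0] * n
        for item, b in enumerate(owner):
            bundle_vals[b] += values[item]
        best = max(best, min(bundle_vals))
    return best
```

For example, with item values [3, 2, 2, 1] and two agents, the best balanced split is {3, 1} versus {2, 2}, so the MMS is 4.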

Based on work with Moshe Babaioff, Tomer Ezra, Ariel Sapir, and Laliv Tauber.

Date: November 3, 2020

Speaker: Aviv Zohar (The Hebrew University of Jerusalem)

Title: Pricing Cryptocurrency Mining Hardware [Recording]


Cryptocurrencies that are based on Proof-of-Work often rely on special purpose hardware (ASICs) to perform the mining operations that secure the system. We argue that ASICs have been mispriced by miners and sellers who only consider their expected returns, and that mining hardware should in fact be treated as a bundle of financial options that, when exercised, convert electricity to virtual coins. We provide a method of pricing ASICs based on this insight, and compare the prices we derive to actual market prices. Contrary to the widespread belief that ASICs are worth less if the cryptocurrency is highly volatile, we show the opposite effect: volatility significantly increases value. Thus, if a coin's volatility decreases, some miners may leave, affecting security. Finally, we construct a portfolio of coins and bonds that provides returns imitating an ASIC, and evaluate its behavior: historically, realized revenues of such portfolios have significantly outperformed ASICs, showing that there is indeed a mispricing of hardware, and offering an alternative investment route for would-be miners.

Joint work with Aviv Yaish.

Date: October 27, 2020

Speaker: Yannai A. Gonczarowski (Microsoft Research)

Title: Bulow-Klemperer-Style Results for Welfare Maximization in Two-Sided Markets [Recording]


We consider the problem of welfare (and gains-from-trade) maximization in two-sided markets using simple mechanisms that are prior-independent. The seminal impossibility result of Myerson and Satterthwaite [1983] shows that even for bilateral trade, there is no feasible (individually rational, truthful, and budget balanced) mechanism that has welfare as high as the optimal-yet-infeasible VCG mechanism, which attains maximal welfare but runs a deficit. On the other hand, the optimal feasible mechanism needs to be carefully tailored to the Bayesian prior, and even worse, it is known to be extremely complex, eluding a precise description.
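For intuition about the optimal-yet-infeasible VCG benchmark: in a double-auction market with unit-demand buyers and unit-supply sellers, the efficient outcome trades the k highest-value buyers with the k lowest-cost sellers, for the largest k at which the k-th highest value still covers the k-th lowest cost. A minimal illustrative sketch (my code, not from the paper):

```python
def efficient_trades(buyer_values, seller_costs):
    """Efficient (VCG) outcome of a unit-demand double auction: match
    the k highest-value buyers with the k lowest-cost sellers, for the
    largest k at which the k-th value still covers the k-th cost.
    Returns the number of trades and the resulting gains from trade."""
    buyers = sorted(buyer_values, reverse=True)
    sellers = sorted(seller_costs)
    k, gain = 0, 0.0
    while k < min(len(buyers), len(sellers)) and buyers[k] >= sellers[k]:
        gain += buyers[k] - sellers[k]
        k += 1
    return k, gain
```

For example, with buyer values [9, 5, 2] and seller costs [1, 4, 8], two trades are efficient (9 with 1, and 5 with 4), for gains from trade of 9. Trade-reduction mechanisms such as BTR sacrifice some of these trades to achieve truthfulness and budget balance.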

In this paper we present Bulow-Klemperer-style results to circumvent these hurdles in double-auction market settings. We suggest using the Buyer Trade Reduction (BTR) mechanism, a variant of McAfee's mechanism, which is feasible and simple (in particular, it is deterministic, truthful, prior-independent, and anonymous). First, in the setting in which the values of the buyers and of the sellers are sampled independently and identically from the same distribution, we show that for any such market of any size, BTR with one additional buyer whose value is sampled from the same distribution has expected welfare at least as high as the optimal-yet-infeasible VCG mechanism in the original market.

We then move to a more general setting in which the values of the buyers are sampled from one distribution, and those of the sellers from another, focusing on the case where the buyers' distribution first-order stochastically dominates the sellers' distribution. We present both upper and lower bounds on the number of buyers that, when added, guarantees that BTR in the augmented market has welfare at least as high as the optimum in the original market. Our lower bounds extend to a large class of mechanisms, and all of our positive and negative results extend to adding sellers instead of buyers. In addition, we present positive results about the usefulness of pricing at a sample for welfare maximization (and more precisely, for gains-from-trade approximation) in two-sided markets under the above two settings, which to the best of our knowledge are the first sampling results in this context.

Joint work with Moshe Babaioff and Kira Goldner.


Yannai Gonczarowski is a postdoctoral researcher at Microsoft Research New England. His main research interests lie in the interface between the theory of computation, economic theory, and game theory—an area commonly referred to as Algorithmic Game Theory. In particular, Yannai is interested in various aspects of complexity in mechanism design (where mechanisms are defined broadly, from auctions to matching markets), including the interface between mechanism design and machine learning. Yannai received his PhD from the Departments of Math and CS, and the Center for the Study of Rationality, at the Hebrew University of Jerusalem, where he was advised by Sergiu Hart and Noam Nisan, as an Adams Fellow of the Israel Academy of Sciences and Humanities. Throughout most of his PhD studies, he was also a long-term research intern at Microsoft Research in Herzliya. He holds an M.Sc. in Math (summa cum laude) and a B.Sc. in Math and CS (summa cum laude, Valedictorian) from the Hebrew University. Yannai is also a professionally-trained opera singer, having acquired a bachelor’s degree and a master’s degree in Classical Singing at the Jerusalem Academy of Music and Dance. Yannai's doctoral dissertation was recognized with several awards, including the 2018 Michael B. Maschler Prize of the Israeli Chapter of the Game Theory Society and the ACM SIGecom Doctoral Dissertation Award for 2018. Yannai is also the recipient of the inaugural ACM SIGecom Award for Best Presentation by a Student or Postdoctoral Researcher at EC’18, of the Best Paper Award at MATCH-UP’19, and of the inaugural INFORMS AMD Michael H. Rothkopf Junior Researcher Paper Prize (first place) for 2020. His first textbook, "Mathematical Logic through Python" (Gonczarowski and Nisan), which introduces a new approach to teaching the material of a basic Logic course to Computer Science students, tailored to the unique intuitions and strengths of this cohort of students, is forthcoming from Cambridge University Press.

Date: October 20, 2020

Speaker: Jon Kleinberg (Cornell University)

Title: Fairness and Bias in Algorithmic Decision-Making


Recent discussion in the public sphere about classification by algorithms has involved tension between competing notions of what it means for such a classification to be fair to different groups. We consider several of the key fairness conditions that lie at the heart of these debates, and discuss recent research establishing inherent trade-offs between these conditions. We also explore how the complexity of a classification rule interacts with its fairness properties, showing how natural ways of approximating a classifier via a simpler rule can act in conflict with fairness goals.

The talk will be based on joint work with Jens Ludwig, Sendhil Mullainathan, Manish Raghavan, and Cass Sunstein.

Short bio:

Jon Kleinberg is the Tisch University Professor in the Departments of Computer Science and Information Science at Cornell University. His research focuses on the interaction of algorithms and networks, the roles they play in large-scale social and information systems, and their broader societal implications. He is a member of the US National Academy of Sciences and National Academy of Engineering, and the recipient of MacArthur, Packard, Simons, Sloan, and Vannevar Bush research fellowships, as well as awards including the Harvey Prize, the Nevanlinna Prize, and the ACM Prize in Computing.