2021 NBER Decentralization: Mechanism Design for Vulnerable Populations


Modules A and F: Theory 1 and 2

(all times in EST)

Registration Link

Module_A_F.pdf

12:00 - 12:40


Keynote Talk: Sample Complexity of Mechanism Design

Presented by: Tuomas Sandholm, Carnegie Mellon University

Typically in mechanism design it is assumed that the designer has a prior over the agents’ preferences. This is often unrealistic, especially in multi-dimensional settings. For example, in a combinatorial auction, the prior for just a single bidder would have a number of support points that is doubly exponential in the number of items. To address this, Sandholm and Likhodedov introduced the idea of mechanism design with just samples from the prior. This raises the question: how many samples are sufficient to ensure that a mechanism that is empirically optimal on the samples is nearly optimal in expectation on the full, unknown prior? I will discuss structure shared by a myriad of pricing, auction, and lottery mechanisms that allows us to prove strong sample complexity bounds: for any set of agents’ preferences, the objective is a piecewise linear function of the mechanism's parameters. We prove new bounds for mechanism classes not previously studied in the sample-based mechanism design literature and match or improve on the best known guarantees for many classes. The functions we study are significantly different from well-understood functions in machine learning, so our analysis requires a sharp understanding of the interplay between mechanism parameters and agents’ preferences. We strengthen our main results with data-dependent bounds when the distribution over agents’ preferences is well-behaved. We also investigate a fundamental tradeoff in sample-based mechanism design: complex mechanisms often have higher objective value than simple mechanisms, but more samples are required to ensure that the empirical and expected objectives are close. We provide techniques for optimizing this tradeoff. I will then present an even more general recent sample complexity theory, which we have also used for voting and redistribution mechanism design, and discuss how similar techniques can be used to estimate the approximate incentive compatibility of non-incentive-compatible mechanisms. Finally, I will discuss new work on learning within an instance, applied to combinatorial auctions and to revenue preservation in a shrinking multi-unit market.
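
The single-parameter case gives a feel for the setup. The sketch below is ours, not from the talk: the mechanism class is a single posted price for one buyer, the "unknown" prior is taken to be uniform on [0, 1] purely for illustration, and all names and constants are placeholders. For each sampled valuation, revenue is a piecewise linear function of the price parameter, and the empirically optimal price is compared against its true expected revenue.

import numpy as np

rng = np.random.default_rng(0)

def empirical_revenue(price, values):
    # Average revenue of posting `price` against sampled valuations:
    # for a single valuation v, revenue(price) = price if price <= v else 0,
    # a piecewise linear function of the price parameter.
    return np.mean(np.where(values >= price, price, 0.0))

def expected_revenue(price, cdf):
    # Expected revenue when the true valuation distribution is known.
    return price * (1.0 - cdf(price))

true_cdf = lambda p: min(max(p, 0.0), 1.0)   # illustrative prior: Uniform[0, 1]

for n in (10, 100, 10_000):
    samples = rng.uniform(0.0, 1.0, size=n)
    # The empirically optimal posted price is attained at one of the samples.
    best = max(samples, key=lambda p: empirical_revenue(p, samples))
    print(f"n={n:>6}: price {best:.3f}, "
          f"empirical revenue {empirical_revenue(best, samples):.3f}, "
          f"true expected revenue {expected_revenue(best, true_cdf):.3f}")

# As n grows, the empirical and expected revenue of the learned price converge;
# sample complexity bounds quantify how many samples this takes.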

12:40 - 13:20


Inverse selection

Presented by: Rohit Lamba, Penn State University

Co-Author(s): Markus Brunnermeier, Princeton University; Carlos Segura-Rodriguez, Banco Central de Costa Rica

Big data, machine learning, and AI invert the adverse selection problem: they allow insurers to infer statistical information and thereby reverse the information advantage from the insuree to the insurer. In a setting with a two-dimensional type space whose correlation can be inferred with big data, we derive three results. First, a novel tradeoff between belief gap and price discrimination emerges, and the number of contracts offered is small. Second, we show that forcing the insurance company to reveal its statistical information can be welfare reducing. Third, we show that in a setting with naïve agents who do not perfectly infer statistical information from the prices of offered contracts, price discrimination significantly boosts insurers’ profits.

13:20 - 14:00


Fair Prediction with Endogenous Behavior

Presented by: ChangHwa Lee, University of Pennsylvania

Co-Author(s): Christopher Jung, University of Pennsylvania; Sampath Kannan, University of Pennsylvania; Mallesh M. Pai, Rice University; Aaron Roth, University of Pennsylvania; Rakesh Vohra, University of Pennsylvania

There is increasing regulatory interest in whether machine learning algorithms deployed in consequential domains (e.g. in criminal justice) treat different demographic groups “fairly.” However, there are several proposed notions of fairness, typically mutually incompatible. Using criminal justice as an example, we study a model in which society chooses an incarceration rule. Agents of different demographic groups differ in their outside options (e.g. opportunity for legal employment) and decide whether to commit crimes. We show that equalizing type I and type II errors across groups is consistent with the goal of minimizing the overall crime rate; other popular notions of fairness are not.
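
In the paper's criminal-justice example, the fairness notion that turns out to be consistent with minimizing the overall crime rate can be written as follows (the notation is ours, for illustration only):

\[
\Pr[\text{incarcerated} \mid \text{no crime},\ \text{group } g] = \Pr[\text{incarcerated} \mid \text{no crime},\ \text{group } g'] \quad \text{(equal type I errors)}
\]
\[
\Pr[\text{not incarcerated} \mid \text{crime},\ \text{group } g] = \Pr[\text{not incarcerated} \mid \text{crime},\ \text{group } g'] \quad \text{(equal type II errors)}
\]

for all groups \(g, g'\), where the decision to commit a crime is itself endogenous to the incarceration rule.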

14:00 - 14:20

Module A Discussion

12:00 - 12:10

Module F Introduction: Chris Shannon, University of California, Berkeley

12:10 - 12:50


Informationally Simple Implementation

Presented by: Agathe Pernoud, Stanford University

Co-Author(s): Simon Gleyze, Paris School of Economics

A designer wants to implement a social objective that depends on an unobserved state of the world. Unlike the standard approach, agents do not have ex-ante private information but can acquire costly information about their preferences. The choice of mechanism generates informational incentives, as it affects the information that players choose to acquire before play begins. A mechanism is informationally simple if agents have no incentive to learn about others’ preferences. This endogenizes an “independent private value” property of the interim information structure. We show that a mechanism is informationally simple if and only if it is dictatorial. This holds for generic environments and any smooth cost function that satisfies an Inada condition. Hence even strategy-proof mechanisms incentivize agents to learn about others and require them to hold beliefs about opponents’ play. We then show that the lack of informational simplicity may induce a new type of rent, due to incomplete information aggregation, even when the cost of information is vanishing. The scope for informationally simple implementation, which could restore strategic simplicity and avoid this rent, is limited. Finally, we show that even though agents’ information is endogenously correlated, full surplus extraction à la Crémer-McLean is impossible, and we investigate how sequential mechanisms can sometimes restore informational simplicity.

12:50 - 13:30

Learning with Limited Memory: Bayesianism vs Heuristics

Presented by: Tai-Wei Hu, University of Bristol

Co-Author(s): Kalyan Chatterjee, Penn State University

We study the classical sequential hypothesis testing problem (Wald, 1950), but add a memory constraint modelled by finite automata. Generically, the optimal rule implied by Bayes' rule is impossible to implement with any finite-state automaton. We then introduce stochastic finite-state automata with a memory constraint and study the constrained optimal rule. Two classes of information structures are considered: the breakthrough model, in which one signal fully reveals the state of nature but the others do not, and a more symmetric model, in which two signals are of similar strength. In the first, randomization is strictly optimal whenever the memory constraint is binding and the optimum requires some learning. In the second, randomization is not optimal, but the optimal finite automaton uses qualitative probabilities.
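
As a minimal illustration of the object being studied (ours, not the authors' model or code), the sketch below contrasts exact Bayesian updating, which generically needs unbounded memory to track the posterior, with a stochastic finite-state automaton that holds one of M memory states and randomizes at its boundary states. It illustrates the automaton class, not the paper's constrained-optimal rule; the signal strength Q, state count M, and randomization probability EPS are arbitrary placeholders.

import random

Q = 0.7     # Pr[signal matches the true state]; illustrative signal strength
M = 5       # number of memory states available to the automaton
EPS = 0.3   # illustrative randomization probability at the boundary states

def bayes_posterior(p, signal):
    # Exact posterior that theta = 1; generically takes a new value after
    # every history, so it cannot be tracked with finitely many states.
    like1 = Q if signal == 1 else 1 - Q
    like0 = 1 - Q if signal == 1 else Q
    return p * like1 / (p * like1 + (1 - p) * like0)

def automaton_step(state, signal, rng):
    # One transition of a stochastic automaton with memory states 0..M-1.
    # Interior states move deterministically with the signal; at a boundary
    # state, a confirming signal triggers a decision only with probability
    # EPS (the randomized transition); otherwise the automaton stays put.
    nxt = state + (1 if signal == 1 else -1)
    if 0 <= nxt <= M - 1:
        return nxt, None
    if rng.random() < EPS:
        return state, (1 if state == M - 1 else 0)
    return state, None

rng = random.Random(0)
theta = 1                                  # true state of nature
posterior, state, decision, t = 0.5, M // 2, None, 0
while decision is None and t < 200:
    signal = 1 if rng.random() < (Q if theta == 1 else 1 - Q) else 0
    posterior = bayes_posterior(posterior, signal)
    state, decision = automaton_step(state, signal, rng)
    t += 1
print(f"after {t} signals: Bayes posterior {posterior:.4f}, automaton decision {decision}")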

13:30 - 14:10


Adapting the Groves-Ledyard Mechanism to Reduce Failures of Individual Rationality

Presented by: PJ Healy, Ohio State University

Co-Author(s): Renkun Yang, Ohio State University


14:10 - 14:40

Module F Discussion