2021 NBER Decentralization:

Mechanism Design for Vulnerable Populations


Conference Program



Day 1 - Thursday April 15 (all times US Eastern)

11:50 - 12:00

Introduction and Welcome: Sera Linardi and Scott Page

Organizer: Robizon Khubulashvili, University of Pittsburgh

This module focuses on big data, machine learning, and fairness in AI with motivating examples from criminal justice and the insurance industry.

12:00 - 12:40


Keynote Talk: Sample Complexity of Mechanism Design

Presented by: Tuomas Sandholm, Carnegie Mellon University

Typically in mechanism design it is assumed that the designer has a prior over the agents’ preferences. This is often unrealistic, especially in multi-dimensional settings. For example, in a combinatorial auction, the prior for just a single bidder would have a number of support points that is doubly exponential in the number of items. To address this, Sandholm and Likhodedov introduced the idea of mechanism design with just samples from the prior. This raises the question, how many samples are sufficient to ensure that a mechanism that is empirically optimal on the samples is nearly optimal in expectation on the full, unknown prior? I will discuss structure shared by a myriad of pricing, auction, and lottery mechanisms that allows us to prove strong sample complexity bounds: for any set of agents’ preferences, the objective is a piecewise linear function of the mechanism's parameters. We prove new bounds for mechanism classes not previously studied in the sample-based mechanism design literature and match or improve over the best known guarantees for many classes. The functions we study are significantly different from well-understood functions in machine learning, so our analysis requires a sharp understanding of the interplay between mechanism parameters and agents’ preferences. We strengthen our main results with data-dependent bounds when the distribution over agents’ preferences is well-behaved. We investigate a fundamental tradeoff in sample-based mechanism design: complex mechanisms often have higher objective value than simple mechanisms, but more samples are required to ensure that empirical and expected objective are close. We provide techniques for optimizing this tradeoff. I will then present an even more general recent sample complexity theory, which we have used also for voting and redistribution mechanism design. 
I will then discuss how similar techniques can be used to estimate the approximate incentive compatibility of non-incentive-compatible mechanisms. Finally, I will discuss new work on learning within an instance, applied to combinatorial auctions and to revenue preservation in a shrinking multi-unit market.
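The piecewise-linear structure described in the abstract is easiest to see in the simplest mechanism class, a single posted price: empirical revenue on a set of sampled valuations is piecewise linear in the price, with breakpoints only at the sampled values, so the empirically optimal mechanism can be found by searching the samples themselves. A minimal sketch (the uniform prior is an assumption for illustration, not from the talk):

```python
import random

def empirical_revenue(price, samples):
    """Average revenue of a posted price on sampled valuations:
    each sampled buyer purchases iff her value is at least the price."""
    return price * sum(v >= price for v in samples) / len(samples)

def best_empirical_price(samples):
    """Empirical revenue is piecewise linear in the price, with breakpoints
    only at the sampled valuations, so searching the samples suffices."""
    return max(samples, key=lambda p: empirical_revenue(p, samples))

random.seed(0)
samples = [random.uniform(0, 1) for _ in range(2_000)]  # draws from the unknown prior
p_hat = best_empirical_price(samples)
# For U[0,1] values the true optimal posted price is 1/2; with this many
# samples the empirically optimal price lands near it.
print(round(p_hat, 2))
```

The sample-complexity question in the talk is precisely how large the sample must be before `p_hat` is guaranteed to be near-optimal on the unknown prior, for mechanism classes far richer than a single price.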

12:40 - 13:20


Inverse selection

Presented by: Rohit Lamba, Penn State University

Co-Author(s): Markus Brunnermeier, Princeton University; Carlos Segura-Rodriguez, Banco Central de Costa Rica

Big data, machine learning, and AI invert the adverse selection problem: they allow insurers to infer statistical information about applicants, reversing the information advantage from the insuree to the insurer. In a setting with a two-dimensional type space whose correlation can be inferred with big data, we derive three results. First, a novel tradeoff between belief gaps and price discrimination emerges, and the number of contracts offered is small. Second, we show that forcing the insurance company to reveal its statistical information can be welfare reducing. Third, we show that in a setting with naïve agents who do not perfectly infer statistical information from the prices of offered contracts, price discrimination significantly boosts insurers’ profits.

13:20 - 14:00


Fair Prediction with Endogenous Behavior

Presented by: Changhwa Lee, University of Pennsylvania

Co-Author(s): Christopher Jung, University of Pennsylvania; Sampath Kannan, University of Pennsylvania; Mallesh M. Pai, Rice University; Aaron Roth, University of Pennsylvania; Rakesh Vohra, University of Pennsylvania

There is increasing regulatory interest in whether machine learning algorithms deployed in consequential domains (e.g. in criminal justice) treat different demographic groups “fairly.” However, there are several proposed notions of fairness, typically mutually incompatible. Using criminal justice as an example, we study a model in which society chooses an incarceration rule. Agents of different demographic groups differ in their outside options (e.g. opportunity for legal employment) and decide whether to commit crimes. We show that equalizing type I and type II errors across groups is consistent with the goal of minimizing the overall crime rate; other popular notions of fairness are not.
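The fairness notion the paper singles out, equal type I and type II error rates across groups, is straightforward to audit given outcomes and decisions. A minimal sketch with hypothetical labels and predictions (the data below are invented for illustration):

```python
def error_rates(y_true, y_pred):
    """Type I rate = P(predict 1 | truth 0); type II rate = P(predict 0 | truth 1)."""
    fp = sum(p == 1 and t == 0 for t, p in zip(y_true, y_pred))
    fn = sum(p == 0 and t == 1 for t, p in zip(y_true, y_pred))
    n0 = sum(t == 0 for t in y_true)
    n1 = sum(t == 1 for t in y_true)
    return fp / n0, fn / n1

# Hypothetical outcomes and decisions for two demographic groups.
ga_true, ga_pred = [0, 0, 1, 1], [0, 1, 1, 1]
gb_true, gb_pred = [0, 0, 1, 1], [0, 0, 0, 1]
print(error_rates(ga_true, ga_pred))  # (0.5, 0.0)
print(error_rates(gb_true, gb_pred))  # (0.0, 0.5)
```

Here the two groups have unequal type I and type II rates, the pattern the paper's incarceration-rule result argues against when the objective is minimizing the overall crime rate.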

14:00 - 14:20

Module A discussion with the presenters, Alex Albright (Harvard University), and Bo Cowgill (Columbia Business School).

Organizer: Sera Linardi, University of Pittsburgh

Practical issues in service provision - assigning the vulnerable to scarce resources, designing incentives to volunteer, eat healthy, or engage with service providers - are unique opportunities to bridge research and practice in mechanism design. These applied research talks will be followed by practitioner-academic conversations.

14:30 - 14:40

Introduction: Dean Carissa Slotterback, GSPIA, University of Pittsburgh

14:40 - 15:15

Motivating Experts to Contribute to Public Goods: A Personalized Field Experiment on Wikipedia

Presented by: Yan Chen, University of Michigan

Co-Author(s): Rosta Farzan, University of Pittsburgh; Robert Kraut, Carnegie Mellon University; Iman YeckehZaare, University of Michigan; Ark Fangzhou Zhang, Uber Intelligent Decision System

We use a large-scale personalized field experiment on Wikipedia to examine the effect of motivation on the contributions of domain experts to public goods. Our baseline positive response rate is 45%. Furthermore, experts are 13% more interested in contributing when we mention the private benefit of contribution, such as the likely citation of their work. In the contribution stage, using a machine learning model, we find that greater matching accuracy between a recommended Wikipedia article and an expert’s paper abstract, together with an expert’s reputation and the mentioning of public acknowledgement, are the most important predictors of both contribution length and quality. Our results show the potential of scalable personalized interventions using recommender systems to study drivers of prosocial behavior.

15:15 - 15:50

Who Gets Placed Where and Why? An Empirical Framework for Foster Care Placement

Presented by: Alejandro Robinson-Cortes, University of Exeter Business School

This paper presents an empirical framework to study the assignment of children into foster homes and its implications on placement outcomes. The empirical application uses a novel dataset of confidential foster care records from Los Angeles County, CA. The estimates of the empirical model are used to examine policy interventions aimed at improving placement outcomes. In general, it is observed that market thickness tends to improve expected placement outcomes. If placements were assigned across all the administrative regions of the county, the model predicts that (i) the average number of foster homes children go through before exiting foster care would decrease by 8% and (ii) the distance between foster homes and children’s schools would be reduced by 54%.

15:50 - 16:25


Behavioral Food Subsidies

Presented by: Andy Brownback, University of Arkansas

Co-Author(s): Alex Imas, Carnegie Mellon University; Michael A. Kuhn, University of Oregon

We examine the potential of healthy food subsidies for reducing nutritional inequality through demand-side interventions. Using a pre-registered field experiment with low-income grocery shoppers, we show that low-cost, scalable behavioral interventions make subsidies substantially more effective. Our unique design allows us to elicit choices and deliver subsidies both before and during a shopping trip. We examine two novel interventions: giving shoppers greater agency through a choice between subsidies and introducing waiting periods designed to prompt deliberation about food purchases. The interventions increase healthy purchases by 61% relative to choiceless healthy subsidies, and 199% relative to a control group.

16:25 - 17:00

Service Utilization Among the Previously Incarcerated

Presented by: Sera Linardi, University of Pittsburgh

Co-Author(s): Marco Castillo, Texas A&M University; Ragan Petrie, Texas A&M University

Of the 650,000 individuals released annually from prisons, two-thirds are re-arrested within three years. Many struggle to fulfill basic needs or associate with new peers. Programs that offer "aftercare services", a comprehensive menu of supportive services (e.g. housing, transportation, job counseling, peer support groups, etc.), are potentially important. We partner with an aftercare service provider in Pittsburgh and conduct an RCT in which participants are incentivized to use its services at various intensities, i.e. 3 or 5 visits, to receive a reward. The intervention increased visits relative to the control group, and the composition of services used evolved from basic needs to employment and group support as the number of visits increased. However, while the 3-visit goal proves attainable, participants tend to give up before completing 5 visits. This is reflected in preliminary results on re-arrests: modest encouragement is more successful at reducing recidivism.

17:00 - 17:30

Breakout discussion on motivating expert volunteers with Chris Deluzio (Pitt Cyber) and Jana Gallus (UCLA Anderson); on foster care assignments with Joseph Doyle (MIT Sloan) and Mike Shaver (Children’s Home & Aid); on nutritional aid with Erin Henderlight (Benefits Data Trust) and Osea Giuntella (University of Pittsburgh); on post-incarceration services with Deacon Keith Kondrich (Foundation of HOPE), Rev. Anais Hussian (Foundation of HOPE) and Jennifer Doleac (Texas A&M Justice Tech Lab).

17:45 - 18:30

Conference Social in Gather featuring The Decentralization Crossword, set by New York Times crossword setter Natan Last.

You can also:

  • Play best response to every hand at the poker table

  • Practice backward induction in a game of chess

  • Chat with Scott E. Page.

Bring your own beers.


Day 2 - Friday April 16

Organizers: Alex Teytelboym, University of Oxford, and Daniel Pieratt, University of Pittsburgh

The current refugee crisis presents enormous scope for the (re)design of institutions. This module brings together academics across disciplines as well as practitioners working on refugee issues at the international, national, and local levels. In addition to a presentation and a panel, we will host three problem-solving sessions to help academics get acquainted with the challenges facing refugees and practitioners, and to allow practitioners to tap into the expertise of the academic community.

9:00 - 10:00

Presentation Talk: Tommy Andersson, Lund University

10:00 - 11:00

Panel with practitioners:

Alexander Betts (Oxford)

Kheir Mugwaneza (Senior Project Manager, Center For Inclusion Health at Allegheny Health Network)

Alicia Wrenn (Senior Director, HIAS)

11:00 - 12:00

Problem solving sessions moderated by:

Ariel Procaccia (Harvard University)

Alexander Teytelboym (University of Oxford)

Irene Lo (Stanford University)

Rediet Abebe (UC Berkeley)

12:00 - 12:30

Conference Social in Gather (Room open 12:00-14:00)


Organizer: Mallory Avery, University of Pittsburgh

Various mechanisms have been proposed to alleviate disparities in education, each with its own intended and unintended consequences. Follow-up discussions to these theoretical talks will address connection to policy in practice.

14:00 - 14:45


Keynote Talk: A Market Design Solution to Unfair Distribution of Teachers in Schools

Presented by: M. Utku Unver, Boston College

Co-Author(s): Umut Dur, North Carolina State University; Olivier Tercieux, CNRS & PSE; Camille Terrier, HEC Lausanne

14:45 - 15:30


Dropping Standardized Testing for Admissions: Differential Variance and Access

Presented by: Faidra Monachou, Stanford University

Co-Author(s): Nikhil Garg, Stanford University; Hannah Li, Stanford University

The University of California recently suspended through 2024 the requirement that California applicants submit SAT scores, upending the major role standardized testing has played in college admissions. We study the impact of this decision and its interplay with other policies (such as affirmative action) on admitted class composition. We develop a market model with schools and students. Students have an unobserved true skill level, a potentially observed demographic group membership, and an observed application with both test scores and other features. Bayesian schools optimize the dual-objectives of admitting (1) the "most qualified" and (2) a "diverse" cohort. They estimate each applicant's true skill level using the observed features and potentially their group membership, and then admit students with or without affirmative action. We show that dropping test scores may exacerbate disparities by decreasing the amount of information available for each applicant. However, if there are substantial barriers to testing, removing the test improves both academic merit and diversity by increasing the size of the applicant pool. We also find that affirmative action alongside using group membership in skill estimation is an effective strategy with respect to the dual-objective. Findings are validated with calibrated simulations using cross-national testing data.
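The informational channel in the model can be illustrated with a one-dimensional normal-normal sketch (the Gaussian structure and the specific numbers are assumptions for illustration, not the paper's calibration): a Bayesian school shrinks a noisy observed score toward the group prior, and the noisier the signal, the harder the shrinkage, so removing an informative test moves every applicant's estimate toward the prior mean.

```python
def posterior_mean(x, prior_mean, prior_var, noise_var):
    """Normal-normal update: the school's best estimate of true skill
    given a noisy observed score x = skill + N(0, noise_var)."""
    w = prior_var / (prior_var + noise_var)   # weight placed on the observation
    return w * x + (1 - w) * prior_mean

# Two applicants with the same observed score, but the score is a
# noisier signal for group B (e.g. greater barriers to testing).
score = 1.0
est_a = posterior_mean(score, prior_mean=0.0, prior_var=1.0, noise_var=0.5)
est_b = posterior_mean(score, prior_mean=0.0, prior_var=1.0, noise_var=2.0)
print(est_a, est_b)  # ~0.667 vs ~0.333: the noisier signal is shrunk harder
```

This differential shrinkage is the "differential variance" mechanism by which identical applications can yield different skill estimates across groups.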

15:30 - 16:15


Affirmative Action in College Admissions: Motivations, Implications, and Root Issues

Presented by: Brent Hickman, Washington University in St. Louis

Co-Author(s): Aaron L. Bodoh-Creed, University of California, Berkeley

We estimate a model of a college admissions contest with affirmative action (AA) where students compete for seats at better schools by choosing pre-college human capital (HC) investments. We are able to identify and flexibly estimate the contest structure in the college admissions market, including the tendency for AA to affect admissions profiles on different segments of the US college quality spectrum. We identify the effects of college quality, pre-college HC, and unobserved student characteristics on post-college income using a control function derived using methods from the auctions literature. College quality is the primary income determinant, while unobserved student characteristics play a secondary role. Pre-college HC affects income indirectly through its influence on graduation probability and enrollment. We also estimate the fraction of pre-college HC “wasted” by the rat-race nature of the admissions contest, and run counterfactuals of admissions, graduation rates, and post-college income race gaps under alternate admissions schemes not observed in the data.

16:15 - 17:00


Affirmative Action with Overlapping Reserves

Presented by: M. Bumin Yenmez, Boston College

Co-Author(s): Tayfun Sonmez, Boston College

Affirmative action policies provide a balance between meritocracy and equity in a wide variety of real-life resource allocation problems. We study choice rules where meritocracy is attained by prioritizing individuals based on merit, and equity is attained by reserving positions for target groups of disadvantaged individuals. Focusing on overlapping reserves, the case where an individual can belong to multiple target groups, we characterize choice rules that satisfy maximal compliance with reservations, elimination of justified envy, and non-wastefulness. When an individual accommodates only one of the reserved positions, the horizontal envelope choice rule is the only rule to satisfy these three axioms. When an individual accommodates each of the reserved positions she qualifies for, there are complementarities between individuals. Under this alternative convention, and assuming there are only two target groups, such as women and minorities, paired-admissions choice rules are the only ones to satisfy the three axioms. Building on these results, we provide improved mechanisms for implementing a variety of recent reforms, including the 2015 school choice reform in Chile and 2012 college admissions reform in Brazil.

17:00 - 17:20

Breakout Discussions on centralized markets for teachers with Jonah Rockoff (Columbia) and Camille Terrier (HEC Lausanne); on the impact of standardized testing with M. Najeeb Shafiq (University of Pittsburgh) and Caterina Calsamiglia (IPEG); on policy alternatives to affirmative action with Umut Dur (North Carolina State University) and Neeraja Gupta (University of Pittsburgh); on best uses for affirmative action with Scott Duke Kominers (Harvard), and Julien Combe (Ecole Polytechnique).

Day 3 - Saturday April 17

Organizer: Prottoy Akbar, University of Pittsburgh

A substantial barrier to assisting vulnerable populations is the effective targeting of that assistance. This module proposes methodological innovations to improve targeting outcomes in the design of experiments, subsidies, and waitlists.

9:00 - 9:45


Incorporating Ethics and Welfare into Randomized Experiments

Presented by: Yusuke Narita, Yale University

Randomized Controlled Trials (RCTs) enroll hundreds of millions of subjects and involve many human lives. To improve subjects’ welfare, I propose a design of RCTs that I call Experiment-as-Market (EXAM). EXAM produces a Pareto efficient allocation of treatment assignment probabilities, is asymptotically incentive compatible for preference elicitation, and unbiasedly estimates any causal effect estimable with standard RCTs. I quantify these properties by applying EXAM to a water cleaning experiment in Kenya (Kremer et al., 2011). In this empirical setting, compared to standard RCTs, EXAM improves subjects’ predicted well-being while reaching similar treatment effect estimates with similar precision.
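EXAM itself computes a competitive-equilibrium allocation of treatment probabilities; the sketch below is a simplified stand-in (all names and numbers are illustrative) that captures only two properties the abstract highlights: assignment probabilities tilted toward subjects' preferences but bounded away from 0 and 1, and unbiased effect estimation via inverse-probability weighting, which works because the design's probabilities are known.

```python
import random

def bounded_probs(pref_for_treat, eps=0.2):
    """Tilt each subject's treatment probability toward her preference,
    but keep it inside [eps, 1-eps] so every subject can receive either
    arm and causal effects remain estimable."""
    return [eps + (1 - 2 * eps) * p for p in pref_for_treat]

def ipw_ate(y, treated, probs):
    """Horvitz-Thompson estimate of the average treatment effect;
    unbiased because the assignment probabilities are known by design."""
    n = len(y)
    t_term = sum(yi / pi for yi, ti, pi in zip(y, treated, probs) if ti)
    c_term = sum(yi / (1 - pi) for yi, ti, pi in zip(y, treated, probs) if not ti)
    return (t_term - c_term) / n

random.seed(1)
n = 20_000
prefs = [random.random() for _ in range(n)]     # elicited preferences for treatment
probs = bounded_probs(prefs)
treated = [random.random() < p for p in probs]
y = [1.0 * t + random.gauss(0, 1) for t in treated]  # simulated true effect = 1.0
print(round(ipw_ate(y, treated, probs), 1))
```

Subjects who want treatment get it more often (higher predicted well-being), yet the estimator still recovers the simulated effect of 1.0.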

9:45 - 10:30


Pricing People into the Market: Targeting through Mechanism Design

Presented by: Terence Johnson, University of Virginia

Co-Author(s): Molly Lipscomb, University of Virginia

Subsidy programs are typically accompanied by large costs due to the difficulty of screening those who should receive the program from those who would have purchased the good anyway. We design and implement a platform intended to increase the take-up of improved sanitation services by targeting the poorest households for subsidies and using purchases by the wealthy households to increase available subsidies to the poor. We develop a theoretical model designed to isolate the key factors of concern in designing the pricing system. The field project then proceeds in two stages: we first create a demand model based on market data and a demand elicitation experiment, and use the model to predict prices that will maximize take-up subject to an expected budget constraint. We then test the modeled prices on a new sample of households. The treatment led to an increase in market share of mechanical desludging of 4.4 percentage points. The decreased probability of purchasing a manual desludging among those with the largest subsidies was 7.6-8.2 percentage points, leading to a market share increase of mechanical desludging of 7.9-9.6 percentage points in that group. The health impacts among neighborhoods with many poor households were large: a 10% increase in the number of poor households in a treatment neighborhood was associated with a 2.2 percentage point larger decrease in diarrhea. We compare the outcomes of the pricing treatment with alternative targeting methods and pricing structures and show that the pricing treatment outperforms proxy means testing, auctions with perfect pass-through of costs, and straight subsidies on the basis of take-up and/or budget sustainability.

10:30 - 11:15

Design of Lotteries and Waitlists for Affordable Housing Allocation

Presented by: Nick Arnosti, Columbia University

Co-Author(s): Peng Shi, University of Southern California

We study a setting in which dynamically arriving items are assigned to waiting agents, who have heterogeneous values for distinct items and heterogeneous outside options. An ideal match would both target items to agents with the worst outside options, and match them to items for which they have high value. Our first finding is that two common approaches – using independent lotteries for each item, and using a waitlist in which agents lose priority when they reject an offer – lead to identical outcomes in equilibrium. Both approaches encourage agents to accept items that are marginal fits. We show that the quality of the match can be improved by using a common lottery for all items. If participation costs are negligible, a common lottery is equivalent to several other mechanisms, such as limiting participants to a single lottery, using a waitlist in which offers can be rejected without punishment, or using artificial currency. However, when there are many agents with low need, there is an unavoidable tradeoff between matching and targeting. In this case, utilitarian welfare may be maximized by focusing on good matching (if the outside option distribution is light-tailed) or good targeting (if it is heavy-tailed). Using a common lottery achieves near-optimal matching, while introducing participation costs achieves near-optimal targeting.
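The mechanical difference between the two lottery schemes is small but consequential. A minimal sketch (agent and item counts are arbitrary): independent lotteries redraw an agent's rank for every item, while a common lottery reuses a single draw, so an agent who declines a poor fit keeps her priority for a better one.

```python
import random

def independent_priorities(n_agents, n_items, rng):
    """A fresh lottery per item: an agent's rank on one item says
    nothing about her rank on any other."""
    return [[rng.random() for _ in range(n_items)] for _ in range(n_agents)]

def common_priorities(n_agents, n_items, rng):
    """One lottery number reused for every item: declining a poor fit
    does not cost the agent her chance at a better one."""
    draws = [rng.random() for _ in range(n_agents)]
    return [[d] * n_items for d in draws]

rng = random.Random(7)
ind = independent_priorities(3, 2, rng)
com = common_priorities(3, 2, rng)
# Lowest draw wins each item.
ind_winners = [min(range(3), key=lambda a: ind[a][j]) for j in range(2)]
com_winners = [min(range(3), key=lambda a: com[a][j]) for j in range(2)]
print(ind_winners)  # the top-ranked agent may differ across items
print(com_winners)  # the same agent tops every item's ranking
```

The paper's equilibrium analysis shows this persistence of priority is what lets a common lottery encourage agents to hold out for high-value matches rather than accept marginal fits.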

11:15 - 11:45

Breakout discussions on deploying adaptive designs in experiments with Anja Sautmann (World Bank), Maximilian Kasy (Oxford University), and Bibhas Chakraborty (Duke-NUS); on non-pecuniary options for aid programs with Michael Hamilton (University of Pittsburgh), Vivek Bhattacharya (Northwestern University), and Dilip Mookherjee (BU); on allocation of public housing with Daniel Waldinger (NYU) and Neil Thakral (Brown).

Organizer: Robizon Khubulashvili, University of Pittsburgh

We present three papers on the current advances in classic mechanism design.

12:00 - 12:10

Introduction: Chris Shannon, University of California, Berkeley

12:10 - 12:50


Informationally Simple Implementation

Presented by: Agathe Pernoud, Stanford University

Co-Author(s): Simon Gleyze, Paris School of Economics

A designer wants to implement a social objective that depends on an unobserved state of the world. Unlike in the standard approach, agents do not have ex-ante private information but can acquire costly information about their preferences. The choice of mechanism generates informational incentives, as it affects the information that players choose to acquire before play begins. A mechanism is informationally simple if agents have no incentive to learn about others’ preferences. This endogenizes an “independent private values” property of the interim information structure. We show that a mechanism is informationally simple if and only if it is dictatorial. This holds for generic environments and any smooth cost function satisfying an Inada condition. Hence even strategy-proof mechanisms incentivize agents to learn about others and require them to hold beliefs about opponents’ play. We then show that the lack of informational simplicity may induce a new type of rent—even when the cost of information is vanishing—due to incomplete information aggregation. The scope for informationally simple implementation, which could restore strategic simplicity and avoid this rent, is limited. Finally, we show that, even though agents’ information is endogenously correlated, full surplus extraction à la Crémer-McLean is impossible, and we investigate how sequential mechanisms can sometimes restore informational simplicity.

12:50 - 13:30

Learning with Limited Memory: Bayesianism vs Heuristics

Presented by: Tai-Wei Hu, University of Bristol

Co-Author(s): Kalyan Chatterjee, Penn State University

We study the classical sequential hypothesis testing problem (Wald, 1950), but add a memory constraint modelled by finite automata. Generically, the optimal Bayesian rule cannot be implemented by any finite-state automaton. We then introduce stochastic finite-state automata under the memory constraint and study the constrained-optimal rule. Two classes of information structure are considered: the model of breakthroughs, in which one signal fully reveals the state of nature while the others do not, and a more symmetric model in which the two signals are of similar strength. In the first, randomization is strictly optimal whenever the memory constraint binds and the optimum requires some learning. In the second, randomization is not optimal, but the optimal finite automaton uses qualitative probabilities.
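The memory constraint is easy to make concrete. The sketch below (all parameters are illustrative) classifies a biased coin with a deterministic four-state saturating counter, the kind of finite-memory rule the paper compares against the Bayes-optimal test, which would need unbounded memory to track the exact posterior.

```python
import random

def automaton_decision(signals, n_states=4):
    """A deterministic finite-memory rule: a saturating counter that moves
    up on signal 1 and down on signal 0, deciding by which half of the
    state space it ends in. It cannot track the exact Bayes posterior."""
    state = n_states // 2
    for s in signals:
        state = min(n_states - 1, state + 1) if s else max(0, state - 1)
    return int(state >= n_states // 2)   # guess 1 = high-bias hypothesis

random.seed(3)
trials = 2_000
correct = 0
for _ in range(trials):
    truth = random.random() < 0.5                 # H1: bias 0.7, H0: bias 0.3
    bias = 0.7 if truth else 0.3
    signals = [random.random() < bias for _ in range(30)]
    correct += automaton_decision(signals) == truth
print(correct / trials)   # well above chance, but below the Bayes-optimal test
```

The paper's question is when the constrained optimum requires going beyond such deterministic transitions; in the breakthrough model, randomized transitions are strictly better whenever the memory constraint binds.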

13:30 - 14:10


Adapting the Groves-Ledyard Mechanism to Reduce Failures of Individual Rationality

Presented by: Paul J. Healy, Ohio State University

Co-Author(s): Renkun Yang, Ohio State University


14:10 - 14:40

Module F discussion with the presenters, Doron Ravid (University of Chicago) and John Ledyard (Caltech).

14:40 - 15:00

Closing Remarks