Changhwa Lee


Hello! I joined the University of Bristol in Fall 2023 as a lecturer (assistant professor). I received my Ph.D. from the University of Pennsylvania, where I was fortunate to be advised by Rakesh Vohra and George Mailath.

I am interested in mechanism and information design and their applications to platforms, industrial organization, (algorithmic) discrimination, and market design, loosely defined. I am also a research affiliate at GRAPE.


Working Papers

Intermediaries such as Amazon and Google recommend products and services to consumers and receive compensation from the recommended sellers. Consumers find these recommendations useful only if they are informative about the quality of the match between a seller's offering and the consumer's needs. The intermediary would like the consumer to purchase the product from the recommended seller, but is constrained because the consumer need not follow the recommendation. I frame the intermediary's problem as a mechanism design problem in which the designer cannot directly choose the outcome, but must instead encourage the consumer to choose the desired outcome. I show that in the optimal mechanism, the recommended seller has the largest non-negative virtual willingness to pay, adjusted for the cost of persuasion. The optimal mechanism can be implemented via a handicap auction.

I use this model to provide insights for current policy debates. First, to examine the impact of the intermediary's use of seller data, I identify the types of seller data that benefit or harm the consumer and sellers. Second, I find that the optimal direct mechanism protects consumer privacy, but consumer data is leaked to sellers under other implementations. Lastly, I show that the welfare-maximizing mechanism increases consumer surplus but reduces the joint profit of the intermediary and sellers relative to the revenue-maximizing mechanism.
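A rough illustration of the selection rule described above, written in standard Myerson notation; the persuasion-cost term $\kappa_i$ is a placeholder of my own, not the paper's exact adjustment.

% Illustrative sketch only: Myerson virtual value with a placeholder persuasion-cost term.
\[
  \varphi_i(v_i) = v_i - \frac{1 - F_i(v_i)}{f_i(v_i)},
  \qquad
  i^{\ast} \in \arg\max_i \ \max\bigl\{\varphi_i(v_i) - \kappa_i,\ 0\bigr\},
\]
% $v_i$: seller $i$'s willingness to pay; $F_i$, $f_i$: its distribution and density;
% $\kappa_i$: a stand-in for the cost of persuading the consumer to follow the recommendation.
% No seller is recommended when every adjusted virtual value is negative.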

Many allocation problems can be recast as designing membership. The defining feature of membership as an economic good is that its value depends on who is a member. We introduce a framework for optimal membership design by combining an otherwise standard mechanism-design model with allocative externalities that depend flexibly on agents' observable and unobservable characteristics. Our main technical result characterizes how the optimal mechanism depends on the pattern of externalities. Specifically, we show how the number of distinct membership tiers (differing in prices and potentially involving rationing) is increasing in the complexity of the externalities. This insight may help explain a number of mechanisms used in practice to sell membership goods, including artists charging below-market-clearing prices for concert tickets, heterogeneous pricing tiers for access to digital communities, the use of vesting and free allocation in the distribution of network tokens, and certain admission procedures used by colleges concerned about the diversity of the student body.
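As a purely hypothetical illustration (notation mine, not the paper's), allocative externalities of this kind can be written as a membership payoff that depends on who else is admitted.

% Hypothetical notation for illustration; the paper's model may differ.
\[
  u_i(a, \theta, x) = a_i \Bigl( v(\theta_i) + \sum_{j \neq i} a_j \, w(\theta_i, \theta_j, x_j) \Bigr) - t_i,
\]
% $a_j \in \{0,1\}$: whether $j$ is a member; $\theta_j$, $x_j$: $j$'s unobservable and observable
% characteristics; $w$: the externality $j$ exerts on $i$; $t_i$: $i$'s payment.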

The marginal outcomes test (Becker (2010)) has become a "go-to" test of (un)fairness or disparate impact in classification and allocation settings. We consider settings with two key properties: (1) the underlying attribute of the agent being classified is strategically chosen by the agent, and (2) the adjudicator/institution commits to a rule or policy, taking into account strategizing by the agent. In this setting, we show that the outcome test is misspecified: the optimal rule results in different marginal outcomes across demographics, even in the absence of any discriminatory motive for the principal. We derive a correctly specified test for this setting. The test statistic requires estimation of both marginal and average outcomes; the latter captures the effect on agents' incentives. Under additional assumptions, we identify the direction of misspecification of the classical marginal outcomes test.
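In rough notation of my own (not the paper's statistic), the classical test checks equality of marginal outcomes across groups.

% Generic statement of the classical null; the paper's corrected test also uses average outcomes.
\[
  H_0:\ \mathbb{E}\bigl[\, y \mid \text{marginal decision},\ g = A \,\bigr]
       = \mathbb{E}\bigl[\, y \mid \text{marginal decision},\ g = B \,\bigr],
\]
% whereas the corrected statistic combines marginal outcomes with group-level average outcomes,
% the latter capturing how the committed rule shapes agents' incentives.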

Papers in Refereed Conference Proceedings

There is increasing regulatory interest in whether machine learning algorithms deployed in consequential domains (e.g., criminal justice) treat different demographic groups "fairly." However, there are several proposed notions of fairness, and they are typically mutually incompatible. Using criminal justice as an example, we study a model in which society chooses an incarceration rule. Agents from different demographic groups differ in their outside options (e.g., opportunity for legal employment) and decide whether to commit crimes. We show that equalizing type I and type II errors across groups is consistent with the goal of minimizing the overall crime rate; other popular notions of fairness are not.
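In standard error-rate notation (not necessarily the paper's), equalizing type I and type II errors across groups $g$ amounts to the following conditions.

% Generic error-rate-balance conditions, written as an illustration.
\[
  \Pr(\text{incarcerate} \mid \text{no crime},\ g) \ \text{equal across } g \quad \text{(type I)},
  \qquad
  \Pr(\text{release} \mid \text{crime},\ g) \ \text{equal across } g \quad \text{(type II)}.
\]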

We show how to achieve the notion of "multicalibration" from Hébert-Johnson et al. [2018] not just for means, but also for variances and other higher moments. Informally, this means we can find regression functions which, given a data point, make point predictions not just for the expectation of its label but for higher moments of its label distribution as well, and those predictions match the true distributional quantities when averaged not only over the population as a whole but also over an enormous number of finely defined subgroups. This yields a principled way to estimate the uncertainty of predictions on many different subgroups and to diagnose potential sources of unfairness in the predictive power of features across subgroups. As an application, we show that our moment estimates can be used to derive marginal prediction intervals that are simultaneously valid as averaged over all of the (sufficiently large) subgroups for which moment multicalibration has been obtained.
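For concreteness, approximate mean multicalibration in the sense of Hébert-Johnson et al. [2018] can be stated roughly as follows (notation simplified; the moment version imposes analogous conditions on higher central moments of the label distribution).

% Informal statement of (approximate) mean multicalibration.
\[
  \bigl|\, \mathbb{E}\bigl[\, y \mid f(x) = v,\ x \in G \,\bigr] - v \,\bigr| \le \alpha
  \quad \text{for every } G \in \mathcal{G} \text{ and every } v \text{ in the range of } f,
\]
% $\mathcal{G}$: a large collection of (possibly intersecting) subgroups; $\alpha$: the calibration error tolerance.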

Work in Progress