MAXWELL ROSENTHAL

Hello, I'm an economist. This fall, I'll join the faculty in the School of Economics at the Georgia Institute of Technology as an assistant professor. I work primarily on prior-free approaches to mechanism design and contract theory, but also maintain interests in other areas of both theoretical (e.g. behavioral) and applied (e.g. environmental) economics.

You can reach me via e-mail at rosenthal@gatech.edu. My projects are listed below; click a title for a current draft.

This paper develops a data-driven approach to multidimensional screening. The principal observes a population of decision makers, each of whom chooses from a finite number of exogenously specified sets of allocations, and her beliefs about the agent's preferences are informed by this data. In my model, many preference distributions are consistent with the principal's observations. Rather than privilege any one distribution, she evaluates mechanisms by computing their worst-case payoff against the set of distributions that are compatible with the choice data. I show that there are circumstances in which the principal can do better than using a mechanism that recreates one of the choice environments in her data set, even when she knows nothing about the agent's preferences beyond what the data imply.
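As a rough illustration of the evaluation criterion (the notation here is mine, not the paper's): writing $D$ for the observed choice data, $\mathcal{F}(D)$ for the set of preference distributions consistent with $D$, and $U_P(M, F)$ for the principal's expected payoff from mechanism $M$ when preferences are distributed according to $F$, the principal's problem can be sketched as

\[ \max_{M} \; \min_{F \in \mathcal{F}(D)} \; U_P(M, F). \]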

This paper supersedes an earlier version titled Robust Contracting with Uncertain Risk Preferences.

This paper studies a general moral hazard problem in which the principal is uncertain about the agent's risk preferences and his production technology. In addition to choosing how much effort to exert, the agent might also choose from a variety of safe and risky actions. The principal seeks a contract that performs well regardless of the agent's preferences and technology.

This is a demanding criterion: fully contingent contracts (for example, contracts that reward the agent with wages that are strictly increasing in the output he produces) do not guarantee the principal a payoff larger than her payoff when the agent shirks, even if effort is costless for the agent. By contrast, contracts with transfers that do not vary when output is very small protect the principal from severe risk aversion, and contracts with transfers that do not vary when output is very large protect the principal from severe risk seeking. Thus, I identify virtues of these partially contingent contracts, which are widely used in practice.
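A rough sketch of the partially contingent shape described above, with hypothetical thresholds $\underline{y} < \overline{y}$ and wage levels $\underline{w}, \overline{w}$ (none of which are taken from the paper):

\[ w(y) = \begin{cases} \underline{w}, & y \le \underline{y}, \\ \text{increasing in } y, & \underline{y} < y < \overline{y}, \\ \overline{w}, & y \ge \overline{y}, \end{cases} \]

so that transfers are flat at very low output, protecting the principal from severe risk aversion, and flat at very high output, protecting her from severe risk seeking.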

This paper studies the role of stochastic contracts in aligning an agent's unobserved risk-taking behavior with a principal's preferences. In a departure from the existing literature, the principal in my model does not know the agent's risk preferences: she faces at least a small amount of uncertainty about them. I characterize the set of risk-aligned contracts under which the agent chooses risks as if his goal were to maximize the principal's payoff. All risk-aligned contracts are stochastic. I exhibit a general contracting environment in which these contracts are worst-case optimal.
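One informal way to express the alignment property (my notation and simplifications, not the paper's): taking the principal to be risk neutral in output net of transfers, a contract $w$ is risk-aligned if, for any two actions inducing output distributions $G$ and $G'$,

\[ \mathbb{E}_{G}\!\left[u_A(w(y))\right] \ge \mathbb{E}_{G'}\!\left[u_A(w(y))\right] \iff \mathbb{E}_{G}\!\left[y - w(y)\right] \ge \mathbb{E}_{G'}\!\left[y - w(y)\right] \]

for every admissible agent utility $u_A$, where $w(y)$ may itself be a lottery since, as noted above, risk-aligned contracts are stochastic.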

I study a contracting environment in which there are repeated interactions between a time-inconsistent agent who does not completely understand his own future behavior and a better-informed principal. Although the agent's initial beliefs are incorrect, he learns to more accurately forecast his future behavior by inspecting his own choice history. However, the principal is able to manipulate the evolution of the agent's beliefs by selectively pooling agent types, and this manipulation is the focus of the paper. I conclude that, in many circumstances, learning does little to protect the agent or to promote efficiency. Furthermore, if the agent's beliefs initially reflect some degree of pessimism, his ability to learn can actually leave him worse off in the long run. I show that while competition between principals protects the agent with a favorable up-front transfer, the critical inefficiencies demonstrated in the monopoly case still apply, with particularly inefficient contracts offered in early periods. Finally, I analyze restrictions on the allowable contract space that improve social welfare and facilitate learning by the agent.

Valuing Science Policy: Dynamic Decision-Making With Generalized Bayesian Learning [with Ivan Rudik and Derek Lemoine]

Available on request.