Hello, I am an economist. I am primarily interested in microeconomic theory, especially mechanism design and contract theory, but I have also worked in other areas of economics. Below are descriptions of my research projects.
Risk Alignment [Click here for a current draft]
Contracts that punish or reward agents for outcomes that are beyond their control are ubiquitous. Executives are paid for observable luck. Firms use competition to motivate their salesforce. Organizations award bonuses to individuals on the basis of team performance. In all of these situations, the agent's compensation depends explicitly on variables other than his own output.
The celebrated informativeness principle establishes circumstances in which optimal contracts for moral hazard condition transfers to the agent only on signals that are informative about his effort. Contracts that are contingent on uninformative signals expose the agent to additional risk without providing more powerful incentives, and a risk-averse agent demands compensation from the principal in return. Consequently, individuals should be rewarded on the basis of variables other than their own performance only if doing so reduces risk for the agent, as in common-shock tournaments and relative performance evaluation. Yet practice frequently departs from this prescription: evidence on the use of relative performance evaluation in executive compensation is mixed, salespeople are assigned to separate territories, and team members undertake independent tasks.
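For reference, the textbook first-order-approach condition behind the informativeness principle (standard in the literature following Holmström, not material from this paper's draft): with output $x$, an additional signal $y$, effort $a$, joint density $f(x,y \mid a)$, and agent utility $u$, the optimal transfer $w(x,y)$ satisfies

```latex
\frac{1}{u'\!\left(w(x,y)\right)} \;=\; \lambda \;+\; \mu \,\frac{f_a(x,y \mid a)}{f(x,y \mid a)},
```

where $\lambda$ and $\mu$ are the multipliers on the participation and incentive constraints. The optimal contract ignores $y$ exactly when the likelihood ratio $f_a/f$ does not depend on $y$, i.e., when $x$ is a sufficient statistic for $(x,y)$ with respect to $a$.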
Why then might we observe contracts that condition transfers on uninformative signals? This paper provides a unified explanation that does not rely on the details of the underlying contracting environment or on correlated measures of performance: if the agent's risk preferences are unknown, these contingent contracts are the only contracts that align the agent's risk-taking behavior with the principal's own preferences.
Robust Contracting with Uncertain Risk Preferences [Click here for a current draft]
Formerly circulated as "Robust Incentives for Diversely Risk-Averse Agents."
I study a general moral hazard problem in which the agent's risk preferences are unknown to the principal. In addition to choosing how much effort to exert, the agent might also choose from a variety of safe and risky actions for each level of effort. The principal seeks a contract that is robust to her uncertainty about the agent's preferences and production technology. Many contractual forms predicted by economic theory do not perform well in this environment. In particular, fully contingent contracts do not guarantee the principal a payoff larger than her payoff if the agent shirks. Conversely, partially contingent contracts that are constant at the bottom and the top of the range of feasible outputs are robust to risk-taking. These contracts are abundant in practice, and I provide theoretical foundations for their use.
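To fix ideas, one stylized example of a contract that is constant at the bottom and the top of the output range (my illustration of the general shape, not the paper's characterization): for output $x$ and a linear interior region with slope $\beta > 0$,

```latex
w(x) \;=\; \min\bigl\{\, \overline{w},\; \max\{\, \underline{w},\; \alpha + \beta x \,\} \bigr\},
```

so the agent receives the floor $\underline{w}$ for low outputs, a linearly increasing payment in between, and the cap $\overline{w}$ for high outputs. Familiar salary-plus-capped-bonus schemes take this form.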
Dynamic Contracting with Optimistic Agents [Click here for a current draft]
I study a contracting environment with repeated interactions between a time-inconsistent agent who does not completely understand his own future behavior and a better-informed principal. Although the agent's initial beliefs are incorrect, he learns to more accurately forecast his future behavior by inspecting his own choice history. However, the principal is able to manipulate the evolution of the agent's beliefs by selectively pooling agent types, and this mechanism is the focus of the paper. I find that, in many circumstances, learning does little to protect the agent or to promote efficiency. Furthermore, if the agent's beliefs initially reflect some degree of pessimism, his ability to learn can actually diminish his long-run average payoff. I show that while competition between principals protects the agent with a favorable up-front transfer, the critical inefficiencies demonstrated in the monopoly case still apply, with particularly inefficient contracts offered in early periods. I conclude with an analysis of restrictions to the allowable contract space that improve social welfare and facilitate learning by the agent.
General Bayesian Learning in Dynamic Stochastic Models: Estimating the Value of Science Policy, with Ivan Rudik and Derek Lemoine. [Draft available shortly]