Publications
The Effect of Mergers on Innovation (with Kaustav Das and Tatiana Mayskaya)
American Economic Journal: Microeconomics, forthcoming (SSRN link)
Delegation of Learning from Multiple Sources of Information (with Angelos Diamantopoulos)
European Economic Review, 104981 (2025), doi: 10.1016/j.euroecorev.2025.104981
Imposing Commitment to Rein in Overconfidence in Learning (with Marcelo Ariel Fernandez and Tatiana Mayskaya)
Games and Economic Behavior, 144 (2024), doi: 10.1016/j.geb.2024.01.001
The Dark Side of Transparency: When Hiding in Plain Sight Works (with Tatiana Mayskaya)
Journal of Economic Theory, 212: 105699 (2023), doi: 10.1016/j.jet.2023.105699
Dynamic Project Selection (with Romans Pancs)
Theoretical Economics, 13: 115–144 (2018), doi: 10.3982/TE2379
Conjugate Information Disclosure in an Auction with Learning (with Romans Pancs)
Journal of Economic Theory, 171: 174–212 (2017), doi: 10.1016/j.jet.2017.06.006
Work in Progress
Data Linkage between Markets: Hidden Dangers and Unexpected Benefits (with Claudia Herresthal and Tatiana Mayskaya)
A company uses consumer data from product sales to offer personalized insurance. When consumers are predominantly high-risk, data linkage between the product and insurance markets benefits both high- and low-risk consumers: it generates efficiency gains in the insurance market that are partially passed on to consumers through the product market. When consumers are predominantly low-risk, data linkage can harm both types: high-risk consumers lose rents in the insurance market, while low-risk consumers suffer negative externalities from sophisticated high-risk consumers through the product market.
Ordering Data to Persuade (with Steven Kivinen and Tatiana Mayskaya)
A sender must disclose a string of privately observed signals to a receiver who incurs listening costs. The sender can choose the order in which the signals are disclosed. We show that alternating between favorable and unfavorable signals can dominate "front-loading" the favorable ones.
We study a dynamic model in which a principal can costlessly and gradually transfer knowledge to an agent to increase the agent's productivity. The agent exerts costly, unobservable effort to complete a task. Although knowledge sharing is efficient, a principal with commitment power may deliberately limit training to deter the agent from procrastinating. Small differences in the cost of effort or in the speed of learning can lead to starkly different outcomes: an agent with a higher effort cost or a slower learning rate is made minimally productive, while a slightly more efficient agent receives maximal training. We show that commitment has no value: in the principal-optimal perfect public equilibrium (PPE), the principal trains the agent at a slow rate yet achieves his commitment value. The results provide a novel rationale for inefficiently slow on-the-job training.
This paper explores whether large language models (LLMs) can learn predatory strategies in dynamic environments in which an incumbent faces a repeated threat of entry. Using OpenAI’s GPT-4.1 as decision-making agents, we find that LLMs learn to predate when both predation and accommodation are theoretically viable, and adopt aggressive strategies even when only accommodation is theoretically viable. Profit optimization, however, remains limited, highlighting both the models’ capacity for strategic learning and its limits.