Publications
The Effect of Mergers on Innovation (with Kaustav Das and Tatiana Mayskaya)
American Economic Journal: Microeconomics, forthcoming (SSRN link)
Delegation of Learning from Multiple Sources of Information (with Angelos Diamantopoulos)
European Economic Review, 104981 (2025), doi: 10.1016/j.euroecorev.2025.104981
Imposing Commitment to Rein in Overconfidence in Learning (with Marcelo Ariel Fernandez and Tatiana Mayskaya)
Games and Economic Behavior, 144 (2024), doi: 10.1016/j.geb.2024.01.001
The Dark Side of Transparency: When Hiding in Plain Sight Works (with Tatiana Mayskaya)
Journal of Economic Theory, 212: 105699 (2023), doi: 10.1016/j.jet.2023.105699
Dynamic Project Selection (with Romans Pancs)
Theoretical Economics, 13: 115–144 (2018), doi: 10.3982/te2379
Conjugate Information Disclosure in an Auction with Learning (with Romans Pancs)
Journal of Economic Theory, 171: 174–212 (2017), doi: 10.1016/j.jet.2017.06.006
Work in Progress
Data Linkage between Markets: Hidden Dangers and Unexpected Benefits (with Claudia Herresthal and Tatiana Mayskaya)
A company uses consumer data from product sales to offer personalized insurance. When consumers are predominantly of high risk, data linkage between product and insurance markets benefits both high- and low-risk consumers by generating efficiency gains in the insurance market, which are partially passed on to consumers via the product market. When consumers are predominantly of low risk, data linkage can harm both types. High-risk consumers lose rents in the insurance market, while low-risk consumers face negative externalities from sophisticated high-risk consumers via the product market.
Ordering Data to Persuade (with Steven Kivinen and Tatiana Mayskaya)
A Sender must disclose a string of privately observed signals to a Receiver with listening costs. The Sender can choose the order of the signals. We show that a strategy of alternating between favorable and unfavorable signals can dominate a strategy of "front-loading" favorable signals.
We study a dynamic moral hazard model without monetary transfers, in which a principal can gradually and costlessly transfer knowledge to raise an agent's productivity. Although transferring knowledge is efficient, the principal may deliberately limit it to deter the agent's procrastination, and this inefficiency persists even with infinite patience. Small differences in effort cost or learning rate can generate starkly different outcomes: one agent is made minimally productive, while another receives maximal training. Commitment has no value for the principal, who can achieve her commitment-optimal payoff via a three-phase training scheme with a mid-career dip, consistent with empirical evidence.
This paper explores whether large language models (LLMs) can learn predatory strategies in dynamic environments in which an incumbent faces repeated entry threats. Using OpenAI’s GPT-4.1 as decision-making agents, we find that LLMs learn to predate when both predation and accommodation are theoretically viable, and adopt aggressive strategies even when only accommodation is theoretically viable. However, profit optimization remains limited, highlighting both strategic learning and its limitations.