Large Language Models, Price Discovery, and the Limits to Adoption
(Draft available upon request)
Presentations: AsianFA (scheduled), UC Berkeley (scheduled), Durham Conference for Finance Job Market Papers (scheduled), FMARC Doctoral Consortium (scheduled), Hong Kong Monetary Authority, FMA Asia Doctoral Student Consortium 2025, Gillmore Center Fintech PhD Workshop 2025, Warwick Business School
Abstract
Large language models (LLMs) are increasingly used as informational intermediaries in financial markets. This paper studies whether they improve market efficiency, whether these gains are monotone as adoption spreads, and whether they narrow the performance gap between institutional and retail investors. I develop a model in which LLM adoption improves the informational content of order flow but also increases correlated trading errors, generating a hump-shaped relation between adoption and market efficiency. Using exogenous ChatGPT outages as quasi-natural experiments, I show that variance ratios temporarily rise during outages, implying slower price discovery. I then use outage-based variation to construct firm-level ChatGPT exposure and show that more exposed firms experience larger post-launch efficiency gains, but that these gains are non-monotone and peak about 10.5 months after launch. Finally, retail trading profitability improves relative to institutional benchmarks, suggesting that LLMs reduce traditional informational asymmetries.
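The abstract's efficiency measure is the variance ratio: under a random walk, the variance of q-period returns equals q times the variance of one-period returns, so a ratio near 1 indicates rapid price discovery while deviations from 1 indicate predictable returns. The sketch below is an illustrative, unadjusted Lo–MacKinlay-style estimator, not the paper's exact specification (the function name and the omission of the overlapping-sample bias correction are my assumptions).

```python
import numpy as np

def variance_ratio(returns, q):
    """Simple (unadjusted) variance ratio VR(q) = Var(q-period returns) / (q * Var(1-period returns)).

    VR(q) near 1 is consistent with a random walk; VR(q) persistently
    above or below 1 implies return predictability, i.e. slower price
    discovery. Illustrative sketch only, not the paper's estimator.
    """
    returns = np.asarray(returns, dtype=float)
    n = len(returns)
    mu = returns.mean()
    var1 = np.sum((returns - mu) ** 2) / (n - 1)
    # Overlapping q-period returns: sliding sums of q consecutive returns.
    rq = np.convolve(returns, np.ones(q), mode="valid")
    varq = np.sum((rq - q * mu) ** 2) / (n - q + 1)
    return varq / (q * var1)
```

For i.i.d. returns the ratio hovers near 1; positively autocorrelated returns (e.g., an AR(1) with a positive coefficient) push it above 1, which is the kind of temporary rise the outage evidence documents.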