Will the Market Fix the Market? A Theory of Stock Exchange Competition and Innovation [Link]
Updated May 6, 2019
Abstract: As of early 2019, there are 13 stock exchanges in the U.S., across which over 1 trillion shares ($50 trillion) are traded annually. All 13 exchanges use the continuous limit order book market design, a design that gives rise to latency arbitrage—arbitrage rents from symmetrically observed public information—and the associated high-frequency trading arms race (Budish, Cramton and Shim, 2015). Will the market adopt new market designs that address the negative aspects of high-frequency trading? This paper builds a theoretical model of stock exchange competition to answer this question. Our model, shaped by institutional and regulatory details of the U.S. equities market, shows that under the status quo market design: (i) trading behavior across the many distinct exchanges is as if there is just a single “synthesized” exchange, as opposed to traditional platform competition; (ii) as a result, trading fees are perfectly competitive; but (iii) exchanges capture and maintain significant economic rents from the sale of “speed technology” (i.e., proprietary data feeds and co-location)—arms for the arms race. Using a variety of data, we document seven stylized empirical facts that suggest that the model captures the essential economics of how U.S. stock exchanges compete and make money in the modern era. We then use the model to examine the private and social incentives for market design innovation. We show that the market design adoption game among incumbent exchanges is not a coordination game, but rather a repeated prisoner’s dilemma. If an exchange adopts a new market design that eliminates latency arbitrage, it would win share and earn economic rents. However, imitation by other exchanges would result in an equilibrium that resembles the status quo with competitive trading fees, but now without the rents from the speed race.
This means that although the social returns to market design innovation are large, the private returns are much smaller and may be negative, especially for incumbents that derive rents from the status quo. Despite this negative result, however, our analysis does not imply that a market-wide market design mandate is necessary to fix the problem. Rather, it suggests a modest regulatory “push” may be sufficient to tip the balance of incentives and encourage “the market to fix the market.”
Arbitrage Comovement [Link]
Updated March 12, 2019
I argue that arbitrage mistranslates factor information from ETFs to constituent securities and distorts comovement. The intuition behind this distortion is that arbitrageurs trade constituent securities not according to their fundamental exposures but according to their portfolio weights, causing securities to comove with the ETF based on a measure I call arbitrage sensitivity—a combination of portfolio weight and price impact sensitivity—rather than fundamental exposures. Arbitrage sensitivity predicts comovement between stock and ETF returns, especially in periods of high ETF volume and volatility, but not before 2008, when ETFs were not as heavily traded. Arbitrage-induced comovement leads to over-reaction for stocks more sensitive to arbitrage and under-reaction for those less sensitive. A long-short portfolio constructed on arbitrage sensitivity generates an alpha of around 7.5% per year. Unlike most anomalies, arbitrage comovement is strongest in large-cap stocks, which are held by the most actively traded ETFs. Arbitrage comovement implies that observed factor loadings are less reliable for assessing risk, since they are at least partially driven by arbitrage trading rather than fundamental exposures.
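To make the measure concrete, here is a minimal illustrative sketch of an arbitrage-sensitivity calculation. The abstract defines the measure only as a combination of portfolio weight and price impact sensitivity, so the specific functional form below (weight multiplied by an Amihud-style illiquidity proxy), the ticker names, and the numbers are all assumptions for illustration, not the paper's actual construction.

```python
# Hypothetical sketch: arbitrage sensitivity as ETF portfolio weight
# times a crude price-impact proxy. The exact formula in the paper may
# differ; this only illustrates the ranking logic.

def arbitrage_sensitivity(weight, abs_return, dollar_volume):
    """Portfolio weight times an Amihud-style illiquidity proxy."""
    price_impact = abs_return / dollar_volume  # |return| per dollar traded
    return weight * price_impact

# Toy ETF constituents: (portfolio weight, |daily return|, dollar volume)
stocks = {
    "AAA": (0.50, 0.01, 1e9),   # large weight, very liquid
    "BBB": (0.30, 0.02, 2e8),
    "CCC": (0.20, 0.01, 5e7),   # small weight, illiquid
}

sens = {t: arbitrage_sensitivity(*v) for t, v in stocks.items()}
ranked = sorted(sens, key=sens.get, reverse=True)
print(ranked)  # most to least arbitrage-sensitive
```

A long-short portfolio in this spirit would go long the stocks at one end of the ranking and short the other end; note that a liquid stock can rank low even with a large portfolio weight, because its price impact is small.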
The High Frequency Trading Arms Race: Frequent Batch Auctions as a Market Design Response [Link]
Quarterly Journal of Economics, 2015
The high-frequency trading arms race is a symptom of flawed market design. Instead of the continuous limit order book market design that is currently predominant, we argue that financial exchanges should use frequent batch auctions: uniform price double auctions conducted, for example, every tenth of a second. That is, time should be treated as discrete instead of continuous, and orders should be processed in a batch auction instead of serially. Our argument has three parts. First, we use millisecond-level direct-feed data from exchanges to document a series of stylized facts about how the continuous market works at high-frequency time horizons: (i) correlations completely break down, which (ii) leads to obvious mechanical arbitrage opportunities; and (iii) competition has not affected the size or frequency of the arbitrage opportunities, it has only raised the bar for how fast one has to be to capture them. Second, we introduce a simple theory model that is motivated by and helps explain the empirical facts. The key insight is that obvious mechanical arbitrage opportunities, like those observed in the data, are built into the market design—continuous-time serial processing implies that even symmetrically observed public information creates arbitrage rents. These rents harm liquidity provision and induce a never-ending socially wasteful arms race for speed. Third, we show that frequent batch auctions directly address the flaws of the continuous limit order book. Discrete time reduces the value of tiny speed advantages, and the auction transforms competition on speed into competition on price. Consequently, frequent batch auctions eliminate the mechanical arbitrage rents, enhance liquidity for investors, and stop the high-frequency trading arms race.
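The clearing step of one batch interval can be sketched in a few lines. This is a simplified illustration of a uniform-price double auction, not the paper's full mechanism: all orders are for one share, and the single clearing price is taken as the midpoint of the marginal bid and ask, which is one of several admissible price rules.

```python
# Minimal sketch of one frequent-batch-auction interval: all orders
# arriving during the interval are batched and cleared at ONE uniform
# price, rather than processed serially. Unit quantities and the
# midpoint price rule are simplifying assumptions.

def clear_batch(bids, asks):
    """Uniform-price double auction over one batch of unit orders.

    bids, asks: lists of limit prices, one share each.
    Returns (clearing_price, traded_quantity); (None, 0) if no cross.
    """
    bids = sorted(bids, reverse=True)  # highest willingness to pay first
    asks = sorted(asks)                # lowest offer first
    q = 0
    while q < min(len(bids), len(asks)) and bids[q] >= asks[q]:
        q += 1                         # match while the book still crosses
    if q == 0:
        return None, 0
    # Every executed share trades at the same uniform price.
    price = (bids[q - 1] + asks[q - 1]) / 2
    return price, q

price, qty = clear_batch(bids=[10.02, 10.01, 10.00],
                         asks=[9.99, 10.00, 10.03])
print(price, qty)  # 10.005 2
```

Because all crossing orders in the batch execute at one price, being nanoseconds faster than a rival confers no advantage within the interval; competition shifts from speed to the prices quoted.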