I am an Assistant Professor of Finance at the Mendoza College of Business at the University of Notre Dame. I study Empirical Asset Pricing, Financial Market Design, and Market Microstructure.

jshim2@nd.edu

CV (updated Feb 2021)



Working Papers

ETFs, Illiquid Assets, and Fire Sales [Link]

with Karamfil Todorov

New, July 2021

Abstract: We document several novel facts about exchange-traded funds (ETFs) holding corporate bonds. First, the portfolio of bonds that are exchanged for new or existing ETF shares (called creation or redemption baskets) often represents a small fraction of ETF holdings – a fact that we call “fractional baskets.” Second, creation and redemption baskets exhibit high turnover. Third, creation (redemption) baskets tend to have longer (shorter) durations and smaller (larger) bid-ask spreads relative to holdings. Lastly, ETFs with fractional baskets exhibit persistent premiums and discounts, a pattern related to the slow adjustment of NAV returns to ETF returns. We develop a simple model to show that an ETF’s authorized participants (APs) can act as a buffer between the ETF market and the underlying illiquid assets, and help mitigate fire sales. Our findings suggest that ETFs may be more effective than mutual funds in managing illiquid assets.
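For readers less familiar with ETF mechanics, here is a minimal sketch of the two quantities behind these facts: the ETF's premium or discount to NAV, and the fraction of holdings covered by a creation basket. All numbers and security names are hypothetical illustrations, not data or code from the paper.

```python
# Minimal sketch: an ETF's premium/discount to NAV and the "fractional
# basket" share described in the abstract. All inputs are hypothetical.

etf_price = 100.50   # market price of one ETF share
nav = 100.00         # net asset value per share

premium = (etf_price - nav) / nav
print(f"Premium/discount: {premium:.2%}")  # +0.50%: the ETF trades at a premium

# Fractional basket: the creation basket may cover only a small slice
# of the fund's bond holdings.
holdings = {"bond_A", "bond_B", "bond_C", "bond_D", "bond_E"}
creation_basket = {"bond_A", "bond_B"}

basket_fraction = len(creation_basket & holdings) / len(holdings)
print(f"Creation basket covers {basket_fraction:.0%} of holdings")  # 40%
```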

Do Electronic Markets Improve Execution If You Cannot Identify Yourself? [Link]

with Yenan Wang

New, March 2021

Abstract: We study two aspects of the transition to electronic markets for large uninformed traders: (1) the increase in trading frequency, and (2) the move to anonymous trading, which challenged the implementation of reputation-based strategies. We present a model of optimal execution where a risk-neutral trader must sell over multiple trading periods before a fixed deadline to several risk-averse market makers. We model trading frequency as the number of trading periods over a fixed time interval, and reputation-based strategies as the ability to commit ex-ante to a sequence of market orders, which we refer to as scheduling. While scheduling and more trading periods are both valuable, a trader typically prefers scheduling over more trading periods. Moreover, the benefits of scheduling significantly increase when coupled with more frequent trading, and outweigh the potential costs associated with predatory trading against predictable orders. Our model suggests that transparent execution algorithms, which could be used to implement scheduling in anonymous electronic markets, may have important benefits for large, uninformed traders.
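To illustrate why committing to a schedule of orders can be valuable, the sketch below spreads a fixed sale across more trading periods under a simple quadratic impact cost. The cost function and the even split are illustrative assumptions only; they are not the paper's equilibrium model with risk-averse market makers.

```python
# Illustrative sketch of "scheduling": committing ex ante to a sequence of
# market orders before a fixed deadline. Quadratic per-order impact cost
# is an assumption for illustration, not the paper's model.

def execution_cost(order_sizes, impact=0.1):
    """Total cost when an order of size q incurs impact * q**2."""
    return sum(impact * q**2 for q in order_sizes)

total_to_sell = 100.0

# Compare selling everything at once with committing to an even schedule
# over progressively more trading periods.
for n_periods in (1, 4, 16):
    schedule = [total_to_sell / n_periods] * n_periods
    print(f"{n_periods:>2} periods -> cost {execution_cost(schedule):,.1f}")
# Under these assumptions, cost falls like 1/n as the schedule is spread out.
```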

A Theory of Stock Exchange Competition and Innovation: Will the Market Fix the Market? [Paper Link] [Appendix Link]

with Eric Budish and Robin Lee

Revise and Resubmit, Journal of Political Economy, Feb 2021

Updated July 2020

Press: Bloomberg

Policy: SEC Commissioner

Abstract: This paper builds a model of stock exchange competition tailored to the institutional and regulatory details of the modern U.S. stock market. The model shows that under the status quo market design: (i) trading behavior across the seemingly fragmented exchanges is as if there is just a single synthesized exchange; (ii) as a result, trading fees are perfectly competitive; (iii) however, exchanges are able to capture and maintain economic rents from the sale of speed technology such as proprietary data feeds and co-location — arms for the high-frequency trading arms race. We document stylized empirical facts consistent with each of the three main results of the theory. We then use the model to examine the private and social incentives for exchanges to adopt new market designs, such as frequent batch auctions, that address the negative aspects of high-frequency trading. The robust conclusion is that private innovation incentives are much smaller than the social incentives, especially for incumbents who face the loss of speed technology rents. A policy insight that emerges from the analysis is that a regulatory “push,” as opposed to a market design mandate, may be sufficient to tip the balance of incentives and encourage “the market to fix the market.”

Arbitrage Comovement [Link]

Updated May 2020

Abstract: I argue that arbitrage mistranslates factor information from ETFs to constituent securities and distorts comovement. The intuition behind this distortion is that arbitrageurs trade constituent securities not according to their fundamental exposures but according to their portfolio weights, causing securities to comove with the ETF based on a measure I call arbitrage sensitivity – a combination of portfolio weight and price impact sensitivity – rather than fundamental exposures. Arbitrage sensitivity predicts comovement between stock and ETF returns, especially in periods of high ETF volume and volatility, but not before 2008, when ETFs were not as heavily traded. Arbitrage-induced comovement leads to over-reaction to ETF returns for stocks more sensitive to arbitrage and under-reaction for those less sensitive. A long-short portfolio constructed based on arbitrage sensitivity generates an alpha of around 7.5% per year. Unlike most anomalies, arbitrage comovement is strongest in large-cap stocks, which are held by the most actively traded ETFs. Arbitrage comovement implies that observed factor loadings are less reliable for assessing risk, since they are at least partially driven by mechanical arbitrage trading rather than fundamental exposures.
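As a concrete illustration of the sorting idea, the sketch below ranks a few hypothetical stocks by a simple product of ETF portfolio weight and price impact sensitivity. The product, the inputs, and the two-way split are stand-ins introduced for illustration; the paper's exact construction of arbitrage sensitivity differs.

```python
# Hypothetical sketch of an arbitrage-sensitivity-style sort. The simple
# product below and all inputs are illustrative, not the paper's measure.

stocks = {
    # ticker: (ETF portfolio weight, price impact sensitivity)
    "AAA": (0.08, 0.02),
    "BBB": (0.03, 0.10),
    "CCC": (0.01, 0.01),
    "DDD": (0.05, 0.06),
}

sensitivity = {t: w * impact for t, (w, impact) in stocks.items()}
ranked = sorted(sensitivity, key=sensitivity.get, reverse=True)

# Split into the most- and least-sensitive halves; the trading direction
# of the paper's long-short portfolio is not shown here.
most, least = ranked[: len(ranked) // 2], ranked[len(ranked) // 2 :]
print("most sensitive: ", most)   # expected to over-react to ETF returns
print("least sensitive:", least)  # expected to under-react
```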

Work in Progress



Bond ETFs are Different: Evidence from Baskets

with Karamfil Todorov

Published Papers



The High Frequency Trading Arms Race: Frequent Batch Auctions as a Market Design Response [Link]

with Eric Budish and Peter Cramton

Quarterly Journal of Economics, 2015

Online Appendix

Press: Financial Times, Bloomberg, Chicago Tribune, Bloomberg Editorial Board, The Economist

Awards: AQR Insight Award (First Prize), Utah Winter Finance Conference (Best Paper Award)

Policy: SEC Chair, European Commission, New York Attorney General

Abstract: The high-frequency trading arms race is a symptom of flawed market design. Instead of the continuous limit order book market design that is currently predominant, we argue that financial exchanges should use frequent batch auctions: uniform price double auctions conducted, for example, every tenth of a second. That is, time should be treated as discrete instead of continuous, and orders should be processed in a batch auction instead of serially. Our argument has three parts. First, we use millisecond-level direct-feed data from exchanges to document a series of stylized facts about how the continuous market works at high-frequency time horizons: (i) correlations completely break down, which (ii) leads to obvious mechanical arbitrage opportunities; and (iii) competition has not affected the size or frequency of the arbitrage opportunities; it has only raised the bar for how fast one has to be to capture them. Second, we introduce a simple theory model which is motivated by and helps explain the empirical facts. The key insight is that obvious mechanical arbitrage opportunities, like those observed in the data, are built into the market design – continuous-time serial processing implies that even symmetrically observed public information creates arbitrage rents. These rents harm liquidity provision and induce a never-ending socially wasteful arms race for speed. Last, we show that frequent batch auctions directly address the flaws of the continuous limit order book. Discrete time reduces the value of tiny speed advantages, and the auction transforms competition on speed into competition on price. Consequently, frequent batch auctions eliminate the mechanical arbitrage rents, enhance liquidity for investors, and stop the high-frequency trading arms race.
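Below is a minimal sketch of how one batch interval could clear under a uniform price double auction. Unit-size orders and the midpoint pricing rule are simplifying assumptions for illustration; the paper specifies the full mechanism.

```python
# Sketch: clearing one batch interval with a uniform-price double auction.
# Orders are unit size for simplicity; this illustrates the mechanism only.

def clear_batch(bids, asks):
    """Match unit bids and asks accumulated over the interval at one price."""
    bids = sorted(bids, reverse=True)  # highest willingness to pay first
    asks = sorted(asks)                # lowest offers first

    matched = 0
    while matched < min(len(bids), len(asks)) and bids[matched] >= asks[matched]:
        matched += 1
    if matched == 0:
        return 0, None  # no crossing orders this interval

    # Every trade in the batch clears at a single uniform price, here the
    # midpoint of the marginal matched bid and ask (one possible convention).
    price = (bids[matched - 1] + asks[matched - 1]) / 2
    return matched, price

qty, px = clear_batch(bids=[10.02, 10.01, 10.00], asks=[9.99, 10.00, 10.03])
print(qty, "units trade at", px)  # 2 units trade at 10.005
```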

Implementation Details for Frequent Batch Auctions: Slowing Down Markets to the Blink of an Eye [Link]

with Eric Budish and Peter Cramton

American Economic Review Papers and Proceedings, 2014