Mainstream finance
(S) means a summary of the paper is available by clicking the down arrow button
(A) means an abstract is available by clicking the down arrow button
Recent Working Papers
Summary:
This is the working paper about which I am currently most excited. The genesis is my nearly decade-long puzzlement at why buyer-side brokerage is so prevalent in various markets for expensive and highly illiquid assets. This is most notably the case with residential & commercial real estate, but also extends to art & musical instruments, M&A, private equity, etc. Much of the literature presumes that such intermediation provides buyers with information about the asset, or helps them find sellers. I couldn’t completely buy into that logic: Zillow has been around for a couple of decades and buyers use brokers as intensively now as they did twenty years ago despite there being an order of magnitude more information available to buyers for free via the internet. The other answer provided by academics (and the popular press) is that the industry is collusive. That, however, doesn’t explain why such intermediation exists across other countries and markets. Two observations, and an inspiring conversation with Brent Ambrose, paved the way to an answer that deviates from the standard thinking.
In teaching and interacting with commercial real estate professionals for well over a decade, I’ve come to appreciate that the highest bidder doesn’t always get the deal. What professionals in these highly illiquid markets co-prioritize with price is certainty of execution.
My own solo paper, “Asset-Level Risk and Return in Real Estate Investments”, published in the Review of Financial Studies in 2021, taught me (and hopefully others) that the vagaries of the transaction process, i.e., search and bargaining, contribute to a great deal of price risk.
The main idea is not a major leap from these two observations: What if intermediation helped to mitigate execution (and transaction) risk because brokers are part of a reputational network and can “vouch” for their clients or work to bring their clients back to the table if negotiations start to break down? This further clicks into place when one considers that brokers are typically compensated only if there is a transaction. In other words, perhaps they work for the transaction and not, as may be commonly thought, for their client. Knowing full well that attacking this problem would require a deep knowledge of game theory, I managed to recruit Brendan Daley to partner with me on formalizing this intuition.
We proceeded as follows. Imagine the seller of a property who is entertaining overtures from several prospective buyers. The seller must select one of the buyers and then attempt to transact with them. The catch is that the buyers’ overtures may not be committal and consummating a transaction takes time (e.g., because it involves a due diligence period). In particular, imagine a situation where a selected buyer may renege on any promises made to an unwitting seller during the selection stage and force the seller into a renegotiation. This is what buying a property through Craigslist might be like. To model this, Brendan and I created a novel search market framework – one in which there is an explicit buyer selection stage followed by a settlement stage. In the Craigslist setting of the model, a rational seller would randomly choose an arriving buyer because any price promises would simply be cheap talk. The seller would then have to bargain with their chosen buyer without knowing the buyer’s true valuation. The bargaining solution to this amounts to giving the buyer a chance to make a “low ball” offer to the seller, which will be rejected half the time, or transact for sure with the seller at an average price that is optimally chosen by the seller.[1] We show that, relative to a standard search model equilibrium in which sellers know buyers’ valuations when they bargain, one expects prices to be depressed, assets to be inefficiently allocated, and a significant proportion of transactions to fail. Because of this, sellers would be willing to pay to learn more about prospective buyers’ valuations.
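To illustrate the flavor of this settlement stage (and not the paper's actual derivation), here is a stylized simulation. The uniform valuations, the seller's "sure" price, the low-ball level, and the rule for which buyers low-ball are all my own hypothetical assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
v = rng.uniform(0.0, 1.0, n)   # buyers' private valuations (hypothetical: uniform)
p_sure = 0.5                   # seller's "transact for sure" price (assumed, not optimized)
p_low = 0.25                   # stylized low-ball offer level (assumed)

# Stylized rule: a buyer who values the asset at least at p_sure pays it;
# any other buyer tries a low-ball offer, which is rejected half the time.
lowball = v < p_sure
accepted = rng.random(n) < 0.5
price = np.where(lowball, np.where(accepted, p_low, np.nan), p_sure)

fail_rate = np.isnan(price).mean()   # proportion of failed transactions
avg_price = np.nanmean(price)        # average price among completed trades
```

Even this crude parameterization produces the qualitative features described above: a sizable fraction of matches fail outright, and the average transacted price sits below the sure-trade price.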
Having demonstrated that information about buyers’ intentions can be very valuable to sellers, we next consider two information-revealing solutions to this seller problem: The first is certification, as might be provided by a buyer broker who credibly vouches for their clients.[2] The second is soliciting committed bids, as in an auction, but with the twist that the seller doesn’t know how many buyers will show up to bid. Both solutions require contractual commitments which are absent in the Craigslist setting but which are, in practice, adopted by markets. For instance, intermediation using real estate brokers is prevalent in the U.S., but selling through auctions is prevalent in Australia. The question is, if sellers choose the settlement mechanism, under what circumstances will they opt for certification through brokers over auctions? Our model’s answer is that an equilibrium with brokers will tend to prevail when there isn’t sufficient competition between bidders, and this will be the case when supply is large relative to demand (i.e., few participating buyers) or when buyers’ private valuations vary widely. We also establish that broker certification is most efficient when the brokers’ services are pre-paid (so that buyers do not hesitate to use them)—this has been the prevalent type of brokerage contract in the US, at least until the recent National Association of Realtors settlement. Finally, in crudely calibrating the model parameters to those of the US residential housing market, we indeed confirm that a broker certification equilibrium would be prevalent when supply and demand are in balance.
This work, three years in the making, has opened up a host of new and original research questions on which I will likely continue to work for several more years.
[1] In a recent AER article, Peski (2022) provides microfoundations for this bargaining solution (which can also be consistent with Myerson's 1984 EMA paper). Under symmetric information, the solution reduces to Nash (1950) bargaining, the standard in search models.
[2] Brokers work within reputational networks. If a broker falsely vouches for the buyer they represent, they will suffer reputational harm (see the works of Chemmanur & Fulghieri, 1994 JF and 1994 RFS, and Lizzeri, 1999 RAND).
ABSTRACT: We find that a measure of aggregate corporate debt maturity choices strongly predicts real GDP growth. The new measure compares well with recently explored predictors of GDP, is no less robust/stable, and is distinct from spread-related variables. We develop a novel theory of firm debt maturity choice explaining these findings: In anticipation of inefficient firm operations during non-contractible negative expected profitability states, long-term lenders charge more interest. When choosing debt maturity, firms balance this against the higher cost of refinancing short-term debt. Maturity choices are more sensitive to profit anticipation whereas default spreads are more sensitive to profit dispersion.
ABSTRACT: We derive conditions on the noise and autocorrelation structure of a reduced form stationary VAR($p$) model that are necessary and sufficient for the impulse response function (IRF) to exhibit fixed sign and/or declining magnitude with horizon. The conditions link IRF restrictions to factorizing completely positive matrices --- a well studied subfield in applied linear algebra and optimization theory. Our results may be useful in identifying or testing for models with noise structure that can be economically interpreted.
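As a concrete (if trivial) illustration of the objects this abstract works with, the sketch below computes the impulse response function of a hypothetical stationary VAR(1) and numerically checks the fixed-sign and declining-magnitude properties for a unit shock; the coefficient matrix is invented for illustration and is not taken from the paper:

```python
import numpy as np

# Hypothetical stationary VAR(1) coefficient matrix (spectral radius < 1)
Phi = np.array([[0.6, 0.2],
                [0.1, 0.5]])

def irf(Phi, shock, horizons):
    """Impulse responses Phi^h @ shock for h = 0, ..., horizons-1."""
    out, x = [], np.asarray(shock, dtype=float)
    for _ in range(horizons):
        out.append(x.copy())
        x = Phi @ x
    return np.array(out)

path = irf(Phi, [1.0, 0.0], 12)
fixed_sign = bool(np.all(path >= 0))                        # IRF never changes sign
declining = bool(np.all(np.diff(np.abs(path[:, 0])) <= 0))  # own response shrinks with horizon
```

For this particular nonnegative, stable Phi both properties hold; the paper's contribution is characterizing exactly when such behavior obtains for general reduced-form VAR(p) models.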
ABSTRACT: Empirical stylized facts in the literature concerning “sin” versus “angel” stocks display asymmetry. Through an experiment, we examine whether such biases can be micro-founded via individuals’ preferences and belief formation. We find that negative environmental and social externalities have thrice the impact of positive externalities on investment choices. Further, negative externalities modestly increase pessimism about investment prospects while positive externalities have no discernible impact. The asymmetry is pervasive, heterogeneous, and comparable in magnitude to loss aversion. Beyond rationalizing stylized empirical facts, our findings should help direct the growing theoretical literature that models the implications of non-pecuniary individual investor behavior.
Published Papers
SUMMARY: For a "layperson's" description of what this paper is about, please see the short article linked here.
This paper seeks to explain various puzzling market features of real estate returns in repeat sales data. I demonstrate that the latter exhibit anomalous scaling behavior and argue this is because repeat sales data suffer from selection bias when private valuations are persistent: Short holding periods mostly reflect investors who were lucky twice because they bought from someone who didn’t value the asset highly, and were soon after able to chance upon an investor who valued the asset more than they did. Meanwhile, long holding periods mostly reflect the experience of investors who were not as lucky because they were among the highest-valuation buyers. It took them a long time to sell because it takes a long time for their private valuation to regress to the mean, by which time, if they sell, it would often be to someone who values the asset no more than they originally did. I show that a calibrated partial equilibrium search model incorporating these intuitions can explain a large number of commercial real estate return characteristics. The model also qualitatively applies to residential real estate and private equity transactions.
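A toy simulation of this selection mechanism (my own stylized parameterization, not the paper's calibrated model) reproduces the basic pattern: per-period price gains are large for short holding periods and shrink for long ones. Prices here are simply midpoints of the two parties' private valuations, and a sale occurs only when an arriving buyer values the asset more than the current owner:

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 0.9                    # persistence of private valuations (assumed)
hold, gain = [], []
for _ in range(10_000):
    # Purchase: a buyer meets a seller with a lower private valuation;
    # the price splits the difference between the two valuations.
    while True:
        u, s = rng.normal(), rng.normal()
        if u > s:
            break
    p_buy = 0.5 * (u + s)
    t = 0
    while True:
        t += 1
        u = rho * u + np.sqrt(1 - rho**2) * rng.normal()  # valuation regresses to the mean
        b = rng.normal()                                  # an arriving prospective buyer
        if b > u:                                         # trade only if the buyer values it more
            p_sell = 0.5 * (u + b)
            break
    hold.append(t)
    gain.append((p_sell - p_buy) / t)   # per-period price gain

hold, gain = np.array(hold), np.array(gain)
short_gain = gain[hold <= 2].mean()     # the "lucky twice" investors
long_gain = gain[hold >= 5].mean()      # high-valuation buyers who had to wait
```

High-valuation buyers pay relatively high prices and must wait for their valuation to mean-revert before a better-matched buyer appears, so their per-period gains are small; quick resellers capture a surplus twice over a short interval.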
SUMMARY: Economists have been fascinated with energy prices and their impact on the macro economy since at least the oil crisis of 1973. In their influential paper testing Ross’ (1976) Arbitrage Pricing Theory or “APT”, Chen, Roll, and Ross (1986) were the first to posit that oil price risk cannot be diversified away and exposure to it should therefore command a risk premium. The empirical evidence amassed since then has been decidedly unconvincing. We take the view that there is no “monolithic” notion of oil price risk to which other assets might or might not be exposed. Rather, investors can be legitimately concerned about various types of distinct oil price risks: temporary (or short-term) shocks, persistent shocks, long-term shocks (those only affecting supply and demand considerations in the distant future), and volatility shocks. We demonstrate that one can decompose a four-factor affine term-structure model with unspanned stochastic volatility into exactly such a hierarchy of shocks. We then estimate this model (via MCMC) using data from oil futures, oil options, and oil stock portfolios. Using oil stocks is important because they contain information that may not be contained in those derivatives that are frequently traded. We demonstrate that, by employing oil stocks, the resulting estimated oil factors have significant explanatory power for non-oil stocks. Non-oil stocks are typically exposed to between 14% and 20% of the oil risk to which the oil industry itself is exposed. This amounts to an average oil risk premium of 0.70% for the typical non-oil stock. It is important to note that the standard methodology for picking up risk premia (via Fama-MacBeth regressions or using noisy proxy portfolios) cannot typically pick up a risk premium of this magnitude. We succeed in identifying the risk premia because we combine information from various asset markets.
The APT provides the intuition that a handful of systematic factors should suffice to account for all priced risk. While we only focus on one fundamental source of systemic risk (i.e., energy), our methodology can be applied more broadly and paves the way to a more structured approach to the pricing of securities in an APT framework.
Summary: The tradeoff theory of the closed-end fund discount, as developed in my 2009 paper with Cherkes and Stanton and in Berk and Stanton (2007), posits that a closed-end fund’s price premium to NAV corresponds to the present value of the benefits provided by the fund structure and managerial skill, less the liability posed by management fees and misaligned interests. When a fund is trading at a discount, free-ridership among shareholders entrenches management, thereby exacerbating the discount in equilibrium. The presence (or potential presence) of activist shareholders, on the other hand, reduces the equilibrium discount because of the threat of fund liquidation. Managers, however, are not passive onlookers and can take action to manage the size of the discount and, therefore, the likelihood that an activist might get involved. One way to do this is by committing to an increase in dividends (effectively, a slow liquidation of the fund) – i.e., a managed distribution policy. In this paper we provide empirical evidence for a model-deduced equilibrium interaction between managerial choices, activist shareholder choices, and the closed-end fund discount. Relative to the total market size of all traded closed-end funds in the U.S., there is an astonishing amount of academic focus on this investment vehicle. Our paper provides further evidence for the tradeoff-based, rather than behavioral, approach to understanding this market.
SUMMARY: On October 11, 2010, the NASDAQ began disseminating calculations of NASDAQ OMX Alpha Indexes. These are proprietary relative performance indices, each of which tracks the relative performance of a target traded security (e.g., Apple) against that of a benchmark (say, the S&P 500 ETF). On April 18, 2011, trading began in Alpha Index Options. To date, about 16,000 contracts have been traded, corresponding to $800M in notional terms. Bob and I designed these indices, hoping that they would appeal both to investors who want to focus on firm-specific performance without having to worry about timing the market, and to quants who want to trade correlation. As Marty Gruber said at an NYU conference dedicated to Alpha Indexes, “…if you’re on an actively managed mutual fund board, the first thing you hear about is relative performance. That’s all directors talk about. That’s all portfolio managers talk about.” In this paper, we provide the motivation for trading in such indices, the valuation analysis for futures and options, and the hedge ratios for risk management. One of the most important aspects of these products is that one can back out implied correlations from index option prices. In particular, one can calculate forward-looking measures of betas. This promises a vast improvement in risk-return analysis relative to current approaches (which primarily rely on regressions using historical data).
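The arithmetic behind backing out an implied correlation can be sketched as follows, under the simplifying assumption (mine, not necessarily the paper's exact specification) that the relative-performance index tracks the difference of log returns, so that its variance is the variance of that difference:

```python
def implied_corr(sig_s, sig_b, sig_rel):
    """Correlation implied by stock, benchmark, and relative-index volatilities.

    Assumes var(r_s - r_b) = sig_s**2 + sig_b**2 - 2*rho*sig_s*sig_b,
    then solves for rho.
    """
    return (sig_s**2 + sig_b**2 - sig_rel**2) / (2.0 * sig_s * sig_b)

def implied_beta(sig_s, sig_b, sig_rel):
    """Forward-looking beta: rho * sig_s / sig_b."""
    return implied_corr(sig_s, sig_b, sig_rel) * sig_s / sig_b
```

For example, hypothetical implied volatilities of 30% (stock), 20% (benchmark), and 25% (relative index) yield an implied correlation of 0.5625 and a forward-looking beta of about 0.84.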
Summary: This paper introduces the methodology of “text regressions”. Simply described, the idea is similar to how spam filters operate, but instead of forecasting whether a document is spam or not (a binary forecast), our algorithm forecasts return volatility (a continuous forecast). We download 10-K reports from the SEC and isolate the Management’s Discussion and Analysis (MD&A) section from each. As long recognized by the accounting literature, these sections contain forward-looking statements and are therefore candidates for testing whether mandatory reports contain information that is useful to investors. We find that the algorithm succeeds in forecasting firms’ return volatilities out of sample as well as, and sometimes better than, forecasts based on realized volatility history. We follow this up with a paper that analyzes the data in more detail and concludes that, in the period subsequent to the passage of the Sarbanes-Oxley Act of 2002, MD&A sections in 10-K reports have become more informative about firm risk, as measured by future realized volatility.
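A minimal sketch of a text regression in this spirit, with a fabricated toy corpus and a plain bag-of-words ridge regression standing in for the paper's actual features and learning algorithm:

```python
import re
import numpy as np

# Fabricated MD&A-style snippets paired with made-up realized volatilities
docs = [
    "we face substantial litigation risk and uncertain demand",
    "stable cash flows and predictable recurring revenue",
    "significant exposure to commodity price fluctuations",
    "conservative balance sheet and steady dividend history",
]
vol = np.array([0.9, 0.2, 0.8, 0.1])

# Bag-of-words term-count features
vocab = sorted({w for d in docs for w in re.findall(r"[a-z]+", d)})
X = np.array([[d.split().count(w) for w in vocab] for d in docs], dtype=float)

# Ridge regression: w = (X'X + a*I)^{-1} X'y
a = 0.1
w = np.linalg.solve(X.T @ X + a * np.eye(len(vocab)), X.T @ vol)
pred = X @ w
```

This in-sample toy only shows the mechanics, i.e., that risk-laden language maps to higher predicted volatility; the paper's contribution is doing this out of sample at scale on actual 10-K filings.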
SUMMARY: A closed-end fund (CEF) is a publicly traded firm that invests in securities. While investors can, in principle, trade either in the CEF's shares or directly in the underlying securities, a CEF rarely trades at a price equal to the value of the securities it holds (its Net Asset Value, or NAV). CEFs usually trade at a discount to NAV, though it is not uncommon for them to trade at a premium. The existence and behavior of this discount, usually referred to collectively as the “closed-end fund puzzle”, poses one of the longest standing anomalies in finance: Why do CEFs generally trade at a discount, and why are investors willing to buy a fund at a premium at its IPO, knowing that it will shortly thereafter fall to a discount?
Our paper develops a rational, liquidity-based model of CEFs that provides an economic motivation for the existence of this organizational form: CEFs offer a means for investors to buy illiquid securities, without facing the potential costs associated with direct trading and without the externalities imposed by an open-end fund structure. In the paper, we first establish that a liquidity rationale for the existence of CEFs is indeed present in the data. We then develop a model based on this insight in which there is a tradeoff between the liquidity benefits of investing in the CEF and the fees charged by the fund's managers. In particular, the model predicts that IPOs will occur in waves, concentrated in certain sectors at a time, that funds will be issued at a premium to net asset value (NAV), and that they will later usually trade at a discount.
We also collect data from a rich variety of sources in order to investigate the model both qualitatively and quantitatively. Overall, we find support for both the liquidity motive for issuing CEFs and the predicted patterns in the premium/discount. Moreover, we find little or no support for an alternative explanation based on investor sentiment. However, we do document one feature of the data that our model cannot explain: the return of newly issued funds underperforms that of seasoned CEFs, though only in CEFs managing fixed-income securities. This overpricing at the IPO suggests that a full explanation of the discount may also require behavioral considerations.
Summary: Among the hundreds of papers on return momentum, there are a few theoretical papers that explain firm-level return-momentum; they tend not, however, to tie the effect to firm-specific observables. There are also a few papers that show the possibility of momentum portfolio strategy profits; they tend not, however, to recreate the correct magnitude or term-structure of the phenomenon. Our paper does several things that no prior theory paper has managed to do: (i) tie the momentum effect to firm specific attributes, (ii) use that to produce realistic momentum strategy profits, (iii) predict new momentum effects, and (iv) demonstrate that the predictions hold in the data.
The paper begins by documenting intuitive examples in which an increase in the firm’s value (good news) is followed by an increase in expected returns. This can be thought of as ‘return-momentum’ at the single firm level. We investigate this by modeling a firm with production, operating costs, expansion and abandonment options. Numerical analysis suggests that the options component and costs dominate the effects mentioned. Real options contribute to return-momentum while firm leverage (financial or operational) can reverse return-momentum and even lead to reversals.
We follow this analysis with a simulation of an economy populated by many firms. This allows us to construct momentum portfolios: these are portfolios that are long on recent winners and short on recent losers. Historically, portfolios constructed using this strategy have proven quite profitable and their profitability has hitherto been seen as unexplainable by standard efficient market arguments. We show that we can calibrate our model economy to exhibit (i) realistic momentum portfolio profits, and (ii) momentum profits that are higher when the portfolio consists only of high revenue-volatility, low book-to-market, or low-cost firms. The additional predictions made by our model are, amazingly enough, borne out in the data.
SUMMARY: The Breeden-Lucas-Rubinstein model of a representative agent with time-separable expected utility does not appear to match observed asset return dynamics (Mehra and Prescott, 1985). Given the historically smooth aggregate consumption time-series, a preference-based explanation of asset returns requires more volatile state-prices than is afforded by time-separable expected utility. It is now recognized that one needs to introduce either path dependence at the representative agent level or non-stationarity in order to capture the magnitudes and dynamics of asset-pricing moments (see Campbell and Cochrane (1999), Barberis, Huang, and Santos (2001), Gordon and St-Amour (2000), Melino and Yang (2003), and Routledge and Zin (2004)).
Our paper asks whether the extra source of volatility in state prices can arise from “taste shocks.” Moreover, we tackle this issue in a more sophisticated way than similar works in the literature that simply assume a representative agent with time-varying preference parameters (Mehra and Sah (2000), Gordon and St-Amour (2000), and Melino and Yang (2003)). Our paper examines an economy of heterogeneous agents facing individual taste shocks. Agents’ taste shocks have an idiosyncratic as well as (possibly) a common component. Because taste shocks are private, only their common components across agents can impact prices. To avoid moral hazard issues (or an unrealistic degree of individual monitoring), we only allow trade on claims over aggregate demand and price-contingent events (i.e., ‘public’ events). Thus, agents in our economy generally cannot fully insure against private taste shocks and markets are incomplete. Moreover, since available securities affect portfolio choice, which in turn affects prices over which agents can then contract, the equilibrium in the type of economy we consider must simultaneously determine both the equilibrium mix of tradeable assets and their prices.
In an economy of heterogeneous agents, we characterize the appropriate equilibrium concept when contingent claims can only trade on price- or aggregate demand-contingent events. To my knowledge, this is quite an original equilibrium concept – the only other macro asset pricing model that endogenizes the asset mix in an incomplete market setting is an unfortunately under-cited paper by Kerry Back in Economic Theory. Surprisingly, one can aggregate across our agents’ demand for securities if their preferences are sufficiently homothetic. In this case, we establish the existence of an equilibrium for a class of parametric models. We then calibrate the model to fit a litany of stylized facts. The overall conclusion of the paper is that both taste shocks and non-standard preferences over taste-shock risk can yield a model consistent with asset returns data.
Summary: A key task of any commercial organization is the choice of its asset configuration. This choice cannot be made without the generation and evaluation of alternatives. Asset valuation is usually a key aspect of the evaluation process. For several decades in the energy industry, the most common form of asset valuation for these purposes has been a style of Discounted Cash-Flow (DCF) analysis, which, in this paper, we call the Standard DCF approach. However, over the past five years in particular, an increasing number of organizations in the upstream petroleum and electrical generation industries, among others, have been experimenting with the use of another approach. The most common term now for this is Real Option Valuation (ROV), although, for reasons that are made clear in Section 1.5, we prefer the more general term: Modern Asset Pricing (MAP). At this point, the future role of MAP is not clear, and there is much fundamental work that remains to be done in developing MAP technology. However, there is enough activity and interest in the use of MAP methods, particularly in the energy industry, that it is appropriate to undertake a selective review of what is known publicly, and what remains to be done, on this topic. This review builds on a special issue of The Energy Journal on the topic of “The Potential for Use of Modern Asset Pricing Methods for Upstream Petroleum Project Evaluation”, guest edited by Laughton (Laughton 1998). In this introductory section, we address the following questions, with reference, where needed, to The Energy Journal issue.
1) What is wrong with Standard DCF? (It is costly to introduce new valuation techniques, so there has to be a reason for doing so.)
2) How does MAP overcome some of the deficiencies of Standard DCF?
3) What ideas are behind the MAP approach to asset valuation?
4) How are MAP analyses done?
5) Why are we reviewing MAP rather than restricting our attention to Real Option Valuation (ROV), which is the focus of almost all of the attention and writing in this area?
Unpublished Manuscripts
ABSTRACT: Based on a dynamic model of informed asset allocation, we identify a holdings-based measure of a manager's forecast for future market returns. The model also predicts that the variance of this measure should be indicative of a manager's ability: The higher the variance, the greater the ability. We test these predictions on a large dataset of mutual fund domestic equity holdings and find strong evidence that, across mutual fund managers, our holdings-based measure appears to contain information for future market returns. Moreover, as the model predicts, where timing ability can be detected the variability of our measure is positively related to timing ability.
ABSTRACT: We find evidence that public firm disclosure, in the form of Management Discussion and Analysis (Sections 7 and 7a of annual reports), is more informative about the firm's future risk following the passage of the Sarbanes-Oxley Act of 2002. Employing a novel text regression, we are able to predict, out of sample, firm return volatility using the Management Discussion and Analysis section from annual 10-K reports (which contains forward-looking views of the management). Using the relative performance of the text model as a proxy for the informativeness of reports, we show that the MD&A sections are significantly more informative after the passage of SOX. We further show that this additional information is associated with a reduction in share illiquidity, suggesting that the information divulged was new to investors. Finally, we find that the increase in informativeness of MD&A reports is most pronounced for firms with higher costs of adverse selection.
ABSTRACT: We investigate a general multiple security equilibrium model in which firms adjust their capital stock in response to economic shocks. Asset values are determined by competitive risk-averse investors. When corporate capital increases in value, firms react by creating more of it. This leads to additional risk that must be borne by investors. Overall, the model generates a VAR(1) structure for the state variables determining the cross-section of expected returns, and is broadly consistent with stylized facts (e.g., the value premium, size premium, earnings momentum, and investment premium). In addition, the paper tests a new prediction of the model and finds support for it in the data.
ABSTRACT: We present a model that captures the tendency of real rates to switch between regimes of high versus low level and volatility, the general shape of the term structure in either regime, the relative frequency of the regimes, and the time varying risk premium associated with the yield curve. We do this by supplementing a pure endowment economy model with a simple constant returns to scale technology. The characteristics of the resulting equilibrium shift between those of a pure endowment and production economy. The shift induces endogenous regime switching in the real interest rate. Among the specifications we consider, combining a linear habit formation endowment economy with risk-free production appears to explain the broadest set of stylized facts.
ABSTRACT: A real options model of resource extraction is considered where management controls both the extraction rates as well as the quality of extracted material earmarked for processing into final product. The minimum quality of material acceptable for processing is called the cutoff grade. If the cutoff is too high, much of the extracted material will go to waste exacting an opportunity cost. If the cutoff is too low, then input capacity constraints will be fully utilized, but with poor revenues at output. The optimal strategy finds a balance between these considerations. It is found that the opportunity costs associated with extracting the marginal unit of resource play a crucial role in finding the optimal strategic balance. Particularly important is the realization that such opportunity costs have a structure analogous to a European call option on the commodity in question. The model is developed in the presence of input and output processing stage constraints, and the set of candidate optimal policies is identified. Simulations are then used to expound on the nature of the optimal policies and the important role played by the opportunity costs.
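The abstract's observation that the marginal opportunity cost has the structure of a European call can be made concrete with a generic Black-Scholes valuation. This is the textbook formula, not the paper's specific model (which imposes its own commodity price dynamics and constraints); here the commodity price plays the role of the spot S and the per-unit processing cost the role of the strike K:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes value of a European call: spot S, strike K,
    maturity T (in years), risk-free rate r, volatility sigma."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
```

Consistent with the abstract's intuition, the option value of a marginal unit of resource, and hence the opportunity cost of extracting it now, rises with the volatility of the commodity price.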