Research
PUBLICATIONS
"Compounding Money and Nominal-Price Illusions," with Mustafa Caglayan, Diogo Duarte and Xiaomeng Lu
Management Science, 2024
We develop a general equilibrium model in which investors simultaneously experience money and nominal-price illusions. We show that the combined effects of these illusions widen the gap between the elasticities of the earnings yields of low- and high-priced stocks with respect to the nominal interest rate. Empirically, we show that the compounded effects of money and nominal-price illusions are stronger for low-priced stocks during periods of high inflation and economic downturns, and for stocks with low institutional ownership. Our findings are robust to controlling for the valuation uncertainty of low-priced stocks, including idiosyncratic volatility and firm age.
"Machine Learning for Continuous-Time Finance," with Diogo Duarte and Dejanir Silva
Review of Financial Studies, 2024
We develop an algorithm for solving a large class of nonlinear high-dimensional continuous-time models in finance. We approximate value and policy functions using deep learning and show that a combination of automatic differentiation and Ito's lemma allows for the computation of exact expectations, resulting in a negligible computational cost that is independent of the number of state variables. We illustrate the applicability of our method to problems in asset pricing, corporate finance, and portfolio choice and show that the ability to solve high-dimensional problems allows us to derive new economic insights.
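To illustrate the key step: for dX_t = mu dt + sigma dW_t, Ito's lemma gives E[dV(X_t)] = (mu V'(x) + (1/2) sigma^2 V''(x)) dt, so exact derivatives from automatic differentiation deliver exact expectations with no Monte Carlo sampling over dW. A minimal one-dimensional sketch follows, with second-order forward-mode AD hand-rolled via hyper-dual numbers; the paper works with deep-learning frameworks and neural-network value functions, and the value function and dynamics below are purely illustrative.

```python
class HyperDual:
    """Number a + b*e1 + c*e2 + d*e1*e2 with e1**2 = e2**2 = 0.
    Evaluating V at x + e1 + e2 returns V(x), V'(x), V'(x), V''(x)
    in the four components -- exact derivatives, no finite differences."""
    def __init__(self, a, b=0.0, c=0.0, d=0.0):
        self.a, self.b, self.c, self.d = a, b, c, d
    def __add__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.a + o.a, self.b + o.b, self.c + o.c, self.d + o.d)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.a * o.a,
                         self.a * o.b + self.b * o.a,
                         self.a * o.c + self.c * o.a,
                         self.a * o.d + self.b * o.c + self.c * o.b + self.d * o.a)
    __rmul__ = __mul__

def ito_generator(V, mu, sigma, x):
    """Drift of V(X_t) for dX = mu dt + sigma dW: mu*V' + 0.5*sigma^2*V''."""
    h = V(HyperDual(x, 1.0, 1.0, 0.0))   # h.b = V'(x), h.d = V''(x), both exact
    return mu * h.b + 0.5 * sigma ** 2 * h.d

# Hypothetical value function and dynamics, chosen so the answer is checkable:
V = lambda x: x * x * x                  # V' = 3x^2, V'' = 6x
print(ito_generator(V, mu=0.1, sigma=0.2, x=2.0))   # 0.1*12 + 0.02*12 = 1.44
```

The same expectation computed by simulating dW would carry Monte Carlo error; here it is exact by construction, which is the property the abstract highlights.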
"Benchmarking machine-learning software and hardware for quantitative economics," with Julia Fonseca, Diogo Duarte, and Alexis Montecinos
Journal of Economic Dynamics and Control, Vol. 111, February 2020
We investigate the performance of machine-learning software and hardware for quantitative economics. We show that the use of modern numerical frameworks can significantly reduce computational time in compute-intensive tasks. Using the Least Squares Monte Carlo option pricing algorithm as a benchmark, we show that specialized hardware and software speed the calculations by up to two orders of magnitude compared to programs written in popular high-level programming languages, such as Julia and Matlab.
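For reference, the Least Squares Monte Carlo (Longstaff-Schwartz) benchmark regresses continuation values on basis functions of the simulated stock price and exercises when intrinsic value exceeds the fitted continuation value. Below is a pure-Python sketch for an American put; the option parameters and quadratic basis are illustrative choices, not the paper's benchmark configuration or code.

```python
import math
import random

def _lstsq(X, y):
    """Least-squares coefficients via normal equations (Gaussian elimination)."""
    k = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    b = [sum(row[i] * v for row, v in zip(X, y)) for i in range(k)]
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(A[r][i]))   # partial pivoting
        A[i], A[p], b[i], b[p] = A[p], A[i], b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [ar - f * ai for ar, ai in zip(A[r], A[i])]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

def lsmc_american_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                      steps=50, paths=5000, seed=0):
    """LSMC price of an American put under geometric Brownian motion."""
    rng = random.Random(seed)
    dt = T / steps
    disc = math.exp(-r * dt)
    S = []                                                  # simulate GBM paths
    for _ in range(paths):
        path, s = [S0], S0
        for _ in range(steps):
            s *= math.exp((r - 0.5 * sigma ** 2) * dt
                          + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
            path.append(s)
        S.append(path)
    cash = [max(K - S[p][steps], 0.0) for p in range(paths)]  # terminal payoff
    for t in range(steps - 1, 0, -1):                         # backward induction
        cash = [c * disc for c in cash]
        itm = [p for p in range(paths) if K > S[p][t]]        # in-the-money only
        if len(itm) < 3:
            continue
        beta = _lstsq([[1.0, S[p][t], S[p][t] ** 2] for p in itm],
                      [cash[p] for p in itm])
        for p in itm:                    # exercise if intrinsic > continuation
            cont = beta[0] + beta[1] * S[p][t] + beta[2] * S[p][t] ** 2
            if K - S[p][t] > cont:
                cash[p] = K - S[p][t]
    return sum(cash) / paths * disc

print(lsmc_american_put())   # roughly 6, near the true American put value
```

The per-path simulation loop and the per-step regression are exactly the compute-intensive, embarrassingly parallel pieces that modern frameworks and hardware accelerate.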
WORKING PAPERS
"Simple Allocation Rules and Optimal Portfolio Choice Over the Lifecycle," with Julia Fonseca, Aaron Goodman, and Jonathan Parker [NBER working paper version]
We develop a machine-learning solution algorithm to solve for optimal portfolio choice in a detailed and quantitatively accurate lifecycle model that includes many features of reality modeled only separately in previous work. We use the quantitative model to evaluate the consumption-equivalent welfare losses from using simple rules for portfolio allocation across stocks, bonds, and liquid accounts instead of the optimal portfolio choices. We find that the consumption-equivalent losses from using an age-dependent rule as embedded in current target-date/lifecycle funds (TDFs) are substantial, around 2 to 3 percent of consumption, even though TDF rules closely mimic average optimal behavior by age until shortly before retirement. Our model recommends higher average equity shares in the second half of life than the portfolio of the typical TDF, so much so that the typical TDF portfolio does not improve on investing an age-independent 2/3 share in equity. Finally, optimal equity shares exhibit substantial heterogeneity, particularly by wealth level, state of the business cycle, and dividend-price ratio, implying substantial gains to further customization of advice or TDFs in these dimensions.
"Global Identification with Gradient-Based Structural Estimation," with Julia Fonseca
This paper develops a gradient-based optimization method to estimate stochastic dynamic models in economics and finance and to assess identification globally. By extending the state space to include all model parameters, we only need to solve the model once to structurally estimate them. We approximate the mapping between parameters and moments by training a neural network on model-simulated data and then use this mapping to find the set of parameters that minimizes a function of the distance between model and data moments. The same mapping can also be used to assess identification globally, detecting issues that a local diagnostic would miss. We illustrate the algorithm by solving and estimating a dynamic corporate finance model with endogenous investment, costly equity issuance, and capital adjustment costs. In this application, our method reduces the estimation time from many hours to a few minutes.
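A stylized sketch of the idea, using a hypothetical one-parameter model whose "moment" is a simulated mean, and a quadratic surrogate standing in for the paper's neural network (none of this is the paper's code). The toy model is built so that theta = 1 and theta = -3 generate the same moment, which is exactly the kind of global identification failure a local diagnostic at the estimate would miss.

```python
import random

def model_moment(theta, n=20000, seed=0):
    """Stand-in for an expensive structural model solve: the simulated mean
    of data generated as theta + 0.5*theta**2 + noise (hypothetical model;
    theta = 1 and theta = -3 imply the same moment by construction)."""
    rng = random.Random(seed)                 # common random numbers across theta
    return sum(theta + 0.5 * theta ** 2 + rng.gauss(0.0, 1.0)
               for _ in range(n)) / n

def fit_quadratic(ts, ms):
    """Least-squares fit m ~ a + b*t + c*t^2 via normal equations."""
    X = [[1.0, t, t ** 2] for t in ts]
    A = [[sum(x[i] * x[j] for x in X) for j in range(3)] for i in range(3)]
    v = [sum(x[i] * m for x, m in zip(X, ms)) for i in range(3)]
    for i in range(3):                        # Gaussian elimination w/ pivoting
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p], v[i], v[p] = A[p], A[i], v[p], v[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            A[r] = [ar - f * ai for ar, ai in zip(A[r], A[i])]
            v[r] -= f * v[i]
    out = [0.0, 0.0, 0.0]
    for i in range(2, -1, -1):
        out[i] = (v[i] - sum(A[i][j] * out[j] for j in range(i + 1, 3))) / A[i][i]
    return out

# Step 1: simulate the model once on a parameter grid, fit the cheap surrogate.
grid = [-4.0 + 0.5 * i for i in range(13)]    # theta in [-4, 2]
a, b, c = fit_quadratic(grid, [model_moment(t) for t in grid])

# Step 2: gradient-based estimation on the surrogate -- no further model solves.
target = model_moment(1.0, seed=1)            # "data" moment, true theta = 1
loss = lambda t: (a + b * t + c * t ** 2 - target) ** 2
grad = lambda t: 2.0 * (a + b * t + c * t ** 2 - target) * (b + 2.0 * c * t)
theta = 0.2
for _ in range(2000):
    theta -= 0.02 * grad(theta)               # converges to theta near 1

# Step 3: global identification check -- the surrogate is cheap everywhere,
# so scan it: theta near -3 also matches the data moment.
near_zero = [x / 100.0 for x in range(-400, 201) if loss(x / 100.0) < 1e-3]
print(theta, near_zero)
```

The global scan in step 3 is the part a local diagnostic (e.g. the Jacobian at the estimate) cannot replicate: the loss is near zero in two separate regions of the parameter space.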
WORKS IN PROGRESS
"Estimation of High-Dimensional Diffusions: A Hyper-Dual Approach," with Diogo Duarte and Dejanir Silva
"Dissecting the Aggregate Market Elasticity," with Mahyar Kargar and Dejanir Silva
"Sectoral Reallocation and Endogenous Risk-Aversion"
In this paper I study a two-sector general equilibrium model with habit preferences and capital adjustment costs. Effective risk aversion is endogenous and increases after negative productivity shocks. The ensuing capital reallocation from the high-risk, high-productivity sector to the low-risk, low-productivity sector amplifies the reduction in aggregate productivity and aggregate consumption. The decrease in consumption places additional upward pressure on effective risk aversion, which further depresses productivity and consumption. The model thus proposes a simple propagation mechanism that can account for observed patterns of slow recoveries after large shocks, and it matches key business-cycle and asset-pricing moments, such as the mean and volatility of the risk-free rate, the equity premium, and Tobin's q.