Abstract: I propose the Neural-Network Efficient Estimator (NNES) for structural dynamic discrete choice models with high-dimensional state vectors. NNES replaces grid- or sieve-based policy evaluation with a deep network for the value function and estimates the structural parameters by maximizing a penalized likelihood that enforces the Bellman equation. I prove that (i) the policy-iteration map retains a zero-Jacobian property, (ii) the resulting likelihood score is Neyman orthogonal, and therefore (iii) the estimator is root-n consistent, where n is the sample size, and semiparametrically efficient, with an information matrix that remains block-diagonal. I provide simulation evidence showing that NNES matches the precision of full-information maximum likelihood, demonstrating its attractiveness in high-dimensional settings.
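As a rough illustration of the penalized-likelihood idea, the sketch below sets up an NNES-style objective for a binary-choice logit model with flow utility linear in the state. The architecture, the penalty weight lam, and the use of the observed next state as a stand-in for the action-specific continuation value E[V(s')|s,a] are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of an NNES-style penalized likelihood (illustrative, not the
# paper's code): binary choice, logit shocks, flow utility u1 = s'theta, u0 = 0.
import torch
import torch.nn as nn

d, beta, lam = 10, 0.95, 1.0                 # state dim, discount, penalty weight
value_net = nn.Sequential(                   # deep net for the ex-ante value V(s)
    nn.Linear(d, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
theta = torch.zeros(d, requires_grad=True)   # structural utility parameters

def nnes_objective(s, a, s_next):
    """Negative log-likelihood plus a penalty enforcing the Bellman equation."""
    v_now = value_net(s).squeeze(-1)
    v_next = value_net(s_next).squeeze(-1)   # crude stand-in for E[V(s')|s,a]
    u1 = s @ theta                           # flow utility of a=1 (u0 normalized to 0)
    cv0 = beta * v_next                      # choice-specific values
    cv1 = u1 + beta * v_next
    nll = nn.functional.binary_cross_entropy_with_logits(cv1 - cv0, a)
    # Bellman penalty: V(s) should equal the log-sum-exp of the choice-specific
    # values under type-I extreme-value shocks.
    bellman = v_now - torch.logsumexp(torch.stack([cv0, cv1], -1), -1)
    return nll + lam * (bellman ** 2).mean()

# one joint gradient step over (theta, network weights) on simulated data
opt = torch.optim.Adam([theta, *value_net.parameters()], lr=1e-3)
s, s_next = torch.randn(256, d), torch.randn(256, d)
a = torch.randint(0, 2, (256,)).float()
opt.zero_grad()
loss = nnes_objective(s, a, s_next)
loss.backward(); opt.step()
```

The key design point is that a single gradient step updates the structural parameters and the network weights together, with the penalty tying the value network to the Bellman equation rather than solving for it in a separate inner loop.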
2. Automatic Debiased Machine Learning for Dynamic Discrete Choice
Abstract: Many causal and structural effects of interest, such as policy effects and the parameters of economic structural models, depend on regression estimates. Because these regressions may involve high-dimensional covariates, machine learning approaches are attractive; however, combining machine learning with identifying equations can introduce regularization and model-selection bias. This paper introduces a method that automatically debiases estimation in dynamic discrete choice problems. The method does not require the analytical form of the bias-correction term to be known, and it applies to any regression learner, including neural networks, random forests, the Lasso, and other techniques available for high-dimensional data. The paper also provides standard errors that are robust to misspecification, rates of convergence for the bias correction, and conditions for valid asymptotic inference on a range of structural effects.
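To make the "automatic" part concrete, the following sketch debiases a policy effect E[gamma(X + delta)] in a static regression, the simplest analogue of the paper's dynamic setting. The dictionary, the ridge regularization, and all names are hypothetical; the point is that the Riesz representer alpha is learned from the moment functional m alone, with no analytic bias-correction formula.

```python
# Illustrative sketch of automatic debiasing for a covariate-shift policy
# effect E[gamma(X + delta)]; a static analogue, not the paper's estimator.
import numpy as np

rng = np.random.default_rng(0)
n, d, delta = 2000, 10, 0.5
X = rng.normal(size=(n, d))
Y = X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(size=n)

def dictionary(x):                       # simple dictionary: linear + squares
    return np.hstack([x, x ** 2])

# 1) regression learner gamma(x) = p(x)'b, fit here by ridge
P = dictionary(X)
b = np.linalg.solve(P.T @ P / n + 0.01 * np.eye(P.shape[1]), P.T @ Y / n)

# 2) automatic Riesz representer alpha(x) = p(x)'rho: minimize the quadratic
#    E[alpha(X)^2 - 2 m(W; alpha)] with m(W; g) = g(X + delta). Only the
#    moment functional m is used -- never alpha's analytic form.
P_shift = dictionary(X + delta)
G = P.T @ P / n
M = P_shift.mean(axis=0)
rho = np.linalg.solve(G + 0.01 * np.eye(G.shape[0]), M)

# 3) debiased (Neyman-orthogonal) estimate of the policy effect
scores = P_shift @ b + (P @ rho) * (Y - P @ b)
print(f"policy effect: {scores.mean():.3f} (se {scores.std() / np.sqrt(n):.3f})")
```

In principle, the same recipe extends to the dynamic discrete choice setting by swapping in the moment functional of the structural estimating equation; only the code computing m changes, which is what makes the debiasing automatic.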
3. Machine Learning for Global Dynamic Stochastic General Equilibrium Models (with Dan Cao and Wenlan Luo)
Abstract: How well can neural networks approximate equilibrium policy functions in high-dimensional, highly nonlinear global dynamic stochastic general equilibrium (GDSGE) models? We benchmark state-of-the-art deep learning solvers against a global time-iteration method with adaptive sparse grids (GDSGE-ASG) and propose an unsupervised approach that combines over-parameterized neural networks with fixed-point iteration to address non-convergence and poor local minima. We establish a constructive equivalence between the GDSGE-ASG solution and neural networks. This mapping highlights why training neural networks is challenging: the equivalent GDSGE-ASG representation requires sparse weights of large magnitude, whereas standard deep learning methods tend to concentrate weights near zero. In the textbook RBC model with an investment irreversibility constraint, solution accuracy under deep learning improves monotonically with network width. Nevertheless, deep learning solvers remain slower and more prone to multiple solutions than GDSGE-ASG. Collectively, these findings provide concrete design principles for developing reliable, scalable global solution methods for high-dimensional DSGE models.
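A minimal caricature of the fixed-point training idea, for a one-sector growth model without the irreversibility constraint; the model, network size, and update rule below are illustrative assumptions, not the paper's benchmark.

```python
# Sketch: fixed-point iteration wrapped around supervised network fitting for
# the consumption policy c(k) of a one-sector growth model (illustrative only).
import torch
import torch.nn as nn

alpha, beta, delta, sigma = 0.36, 0.96, 0.1, 2.0
net = nn.Sequential(nn.Linear(1, 128), nn.Tanh(),
                    nn.Linear(128, 1), nn.Softplus())   # positive consumption
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for it in range(200):                        # outer fixed-point iteration
    k = torch.rand(512, 1) * 9.0 + 1.0       # sampled capital states
    with torch.no_grad():                    # Euler-equation target, held fixed
        c = net(k)
        k_next = (k ** alpha + (1 - delta) * k - c).clamp(min=1e-3)
        c_next = net(k_next)
        rhs = beta * c_next ** (-sigma) * (alpha * k_next ** (alpha - 1) + 1 - delta)
        c_target = rhs ** (-1 / sigma)       # consumption satisfying the Euler eq.
    for _ in range(20):                      # inner supervised fit to the target
        opt.zero_grad()
        loss = ((net(k) - c_target) ** 2).mean()
        loss.backward(); opt.step()
```

The outer loop is a successive-approximation variant of time iteration: the current network induces an Euler-equation target, the inner fit moves the network toward it, and the trained network is a fixed point of the update rather than the minimizer of a single residual loss.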