Research

Working Papers:


“Nonparametric Estimation of Triangular Simultaneous Equations Models under Weak Identification” (Latest Version: March 14, 2015. Under Review) (Matlab Codes Available)


Abstract:  This paper analyzes the effects of weak instruments on identification, estimation, and inference in a simple nonparametric model of a triangular system. The paper derives a necessary and sufficient rank condition for identification, based on which weak identification is established. Nonparametric weak instruments are then defined as a sequence of reduced-form functions whose associated rank shrinks to zero. The problem of weak instruments is shown to resemble an ill-posed inverse problem, which motivates the introduction of a regularization scheme. The paper proposes a penalized series estimation method to alleviate the effects of weak instruments. The rate of convergence of the resulting estimator is derived, and it is shown that weak instruments slow the rate while penalization yields a faster rate. Consistency and asymptotic normality results are also established. Monte Carlo results are presented, and an empirical example is given in which the effect of class size on test scores is estimated nonparametrically.
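
Illustrative sketch (not the paper's available Matlab code): under assumptions of my own, a polynomial control-function version of the penalized series idea, with a fixed ridge penalty standing in for the paper's regularization scheme, can be written in a few lines of Python. The basis order, penalty level, and data-generating process below are arbitrary choices made purely for illustration.

    # Hypothetical sketch, not the paper's Matlab code: penalized series estimation
    # of a triangular model  y = g(x) + u,  x = f(z) + v,  with E[u | v] != 0.
    # Step 1: series regression of x on z; the residual v_hat is the control variable.
    # Step 2: ridge-penalized series regression of y on basis terms in (x, v_hat);
    # the ridge penalty plays the role of the regularization described in the abstract.
    import numpy as np

    rng = np.random.default_rng(0)
    n, order = 500, 4                      # sample size and polynomial order (assumed)
    z = rng.uniform(-1, 1, n)
    v = rng.normal(size=n)
    u = 0.5 * v + rng.normal(size=n)       # endogeneity: u is correlated with v
    x = 0.2 * z + v                        # deliberately weak first stage
    y = np.sin(x) + u                      # true structural function g(x) = sin(x)

    def poly_basis(w, k):
        return np.column_stack([w**j for j in range(k + 1)])

    # Step 1: first-stage series regression and control variable
    Pz = poly_basis(z, order)
    v_hat = x - Pz @ np.linalg.lstsq(Pz, x, rcond=None)[0]

    # Step 2: penalized (ridge) series regression of y on terms in x and v_hat
    B = np.hstack([poly_basis(x, order), poly_basis(v_hat, order)[:, 1:]])
    lam = 0.1                              # penalty level (fixed here, not data-driven)
    beta = np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T @ y)
    g_hat = poly_basis(np.linspace(-2, 2, 9), order) @ beta[:order + 1]
    print(np.round(g_hat, 2))              # estimated g(x) on a grid of x values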




“Identification in a Generalization of Bivariate Probit Models with Dummy Endogenous Regressors,” with Edward Vytlacil (Latest Version: June 9, 2015. Resubmitted, Journal of Econometrics)

Abstract:  This paper provides identification results for a class of models specified by a triangular system of two equations with binary endogenous variables. The joint distribution of the latent error terms is specified through a parametric copula structure that satisfies a particular dependence ordering, while the marginal distributions are allowed to be arbitrary but known. This class of models is broad and includes bivariate probit models as a special case. The paper demonstrates that an exclusion restriction is necessary and sufficient for global identification in a model without common exogenous covariates, where the excluded variable is allowed to be binary. An exclusion restriction is sufficient in models with common exogenous covariates that are present in both equations. The paper then extends the identification analysis to a model where the marginal distributions of the error terms are unknown.
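
A schematic version of this class of models, in notation of my own rather than the paper's, is

    \[
    Y_1 = \mathbf{1}\{\nu_1(Y_2, X) \ge \varepsilon_1\}, \qquad
    Y_2 = \mathbf{1}\{\nu_2(Z, X) \ge \varepsilon_2\}, \qquad
    (\varepsilon_1, \varepsilon_2) \sim C\!\left(F_1, F_2;\, \theta\right),
    \]

where Y_2 is the dummy endogenous regressor, Z is the excluded variable, C is a parametric copula satisfying the dependence ordering, and F_1, F_2 are the known (or, in the extension, unknown) marginal distributions. With standard normal marginals and a Gaussian copula the system reduces to the bivariate probit model.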



“Estimation and Inference with a (Nearly) Singular Jacobian”* with Adam McCloskey (Draft coming soon)


Abstract:  This paper develops extremum estimation and inference results for nonlinear models with very general forms of potential identification failure when the source of this identification failure is known. We examine models whose Jacobian matrix may have deficient but nonzero rank in certain parts of the parameter space. We characterize weak identification in these models by examining sequences of parameters for which the parameter governing the potential identification failure drifts toward the point of identification failure as the sample size grows. This analysis leads to a “local to deficient-rank Jacobian” that does not necessarily have zero rank, allowing us to incorporate many models that have not previously been studied in the weak identification literature. To derive the local asymptotic theory for the estimators, the paper introduces a transformation of the parameter space as a key technical step. Asymptotic distributional results for extremum estimators are developed under a comprehensive class of identification strengths, and uniformly valid inference procedures robust to identification strength are developed from these results. The paper illustrates the results with four examples: sample selection models, models of potential outcomes with endogenous treatment, threshold crossing models with a dummy endogenous variable, and mixed proportional hazard models.

* This paper is motivated by my earlier working paper titled
“Identification and Inference in a Bivariate Probit Model With Weak Instruments” (2009) (Slides for the latter paper are available upon request)
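
One stylized way to read the “local to deficient-rank Jacobian” device in the abstract above, with notation and drift rate of my own rather than the paper's: letting J(\theta; \pi) denote the Jacobian of the criterion and \pi the parameter governing the potential identification failure, weak identification corresponds to drifting sequences

    \[
    \pi_n \longrightarrow \pi_0 \quad \bigl(\text{e.g. } \pi_n = \pi_0 + c/\sqrt{n}\,\bigr),
    \qquad \operatorname{rank} J(\theta; \pi_0) < \dim(\theta),
    \]

so that the limiting Jacobian is rank deficient while the Jacobian along the sequence need not be.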




“Reexamining The Secular Trend in the Standard of Living During Industrialization in Britain”


Summary:  This note studies the long-term trend in the standard of living during the Industrial Revolution in Britain. In particular, we investigate the methods and findings of Komlos's (1993) study. As a proxy for the living standard, he estimates the trend in the mean heights of subgroups of the population and concludes that the standard of living deteriorated throughout the period. In this note, we examine his findings by analyzing the two-step procedure that Komlos employs to deal with a sample that is truncated due to institutional policies. We find that, in each step of the procedure, Komlos requires a set of strong assumptions that is not consistent with the data. We show, however, that the first-step procedure is justified under weaker assumptions, which implies that the result obtained from this step is robust. In doing so, we develop a generalized version of the main theorem on which Komlos bases his procedure. Despite the validity of the first step, we show that a fairly general distributional assumption that justifies the first step creates bias in the second step. We also find that, even with the same data and the same two-step procedure that Komlos uses, one of his reported graphs, the one most supportive of his conclusions, cannot be replicated. Lastly, by decomposing the first and second steps of Komlos's procedure, we find that his final results are mainly driven by the second step. We calculate an alternative trend using a method that does not create bias in the second step, and the resulting series does not exhibit a downward trend. According to this result, the standard of living did not deteriorate during the period.
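
To fix ideas about the truncation problem the note deals with, here is a generic illustration with made-up numbers (this is not Komlos's two-step procedure): when recruits below a minimum height standard are absent from the records, the naive sample mean overstates the population mean height, while a truncated-normal maximum likelihood fit can recover it.

    # Hypothetical illustration of institutional truncation (not Komlos's procedure):
    # heights below a minimum standard never enter the sample, so the naive mean is
    # biased upward; truncated-normal MLE recovers (mu, sigma) from the observed data.
    import numpy as np
    from scipy import optimize, stats

    rng = np.random.default_rng(1)
    mu_true, sigma_true, cutoff = 168.0, 6.5, 165.0   # cm; made-up values
    heights = rng.normal(mu_true, sigma_true, 5000)
    observed = heights[heights >= cutoff]             # left-truncated sample

    def neg_loglik(params):
        mu, sigma = params
        if sigma <= 0:
            return np.inf
        a = (cutoff - mu) / sigma                     # standardized truncation point
        return -np.sum(stats.truncnorm.logpdf(observed, a, np.inf, loc=mu, scale=sigma))

    fit = optimize.minimize(neg_loglik, x0=[observed.mean(), observed.std()],
                            method="Nelder-Mead")
    print("naive mean:", round(observed.mean(), 2), " MLE of mu:", round(fit.x[0], 2))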



Publications:


“Invalidity of the Bootstrap and the m out of n Bootstrap for Confidence Interval Endpoints Defined by Moment Inequalities,” with Donald Andrews, Econometrics Journal (2009), Volume 12, pp. S172–S199.


Abstract:  This paper analyses the finite-sample and asymptotic properties of several bootstrap and m out of n bootstrap methods for constructing confidence interval (CI) endpoints in models defined by moment inequalities. In particular, we consider using these methods directly to construct CI endpoints. By considering two very simple models, the paper shows that neither the bootstrap nor the m out of n bootstrap is valid in finite samples or in a uniform asymptotic sense in general when applied directly to construct CI endpoints.
    In contrast, other results in the literature show that other ways of applying the bootstrap, m out of n bootstrap, and subsampling do lead to uniformly asymptotically valid confidence sets in moment inequality models. Thus, the uniform asymptotic validity of resampling methods in moment inequality models depends on the way in which the resampling methods are employed.
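
A toy version of the underlying boundary problem (not the paper's two models; the statistic below is a simplification of my own): bootstrapping an endpoint-type estimator such as max(Xbar, 0) when the true mean is zero. The limiting statistic max(Z, 0) is nonpositive with probability 1/2, but the bootstrap analogue of that probability does not converge to 1/2 and remains random across original samples, hinting at the kind of inconsistency established in the paper.

    # Toy Monte Carlo (not the paper's models): bootstrap the boundary estimator
    # theta_hat = max(Xbar, 0) when the true mean is 0.  The limit law of
    # sqrt(n) * (theta_hat - 0) is max(Z, 0), so P(statistic <= 0) = 1/2, but the
    # bootstrap version of that probability varies across original samples.
    import numpy as np

    rng = np.random.default_rng(2)
    n, n_boot = 200, 999

    def boot_prob_nonpositive(x):
        theta_hat = max(x.mean(), 0.0)
        draws = []
        for _ in range(n_boot):
            xb = rng.choice(x, size=n, replace=True)
            draws.append(np.sqrt(n) * (max(xb.mean(), 0.0) - theta_hat))
        return np.mean(np.array(draws) <= 0)

    for _ in range(5):                        # a few independent original samples
        x = rng.normal(0.0, 1.0, n)           # mean 0: the problematic boundary case
        print(round(boot_prob_nonpositive(x), 3))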