Working Papers:
This paper develops extremum estimation and inference results for nonlinear models with very general forms of potential identification failure when the source of this identification failure is known. We examine models that may have a general deficient-rank Jacobian in certain parts of the parameter space, leading to an identified set that is a submanifold of the parameter space. We examine standard extremum estimators and Wald statistics under a comprehensive class of parameter sequences characterizing the strength of identification of the model parameters, ranging from nonidentification to strong identification. Allowing for a general singular Jacobian as the limiting point of weak identification allows us to study estimation and inference in many models to which previous results in the weak identification literature do not apply. Using the asymptotic results, we propose two hypothesis testing methods that make use of a standard Wald statistic and data-dependent critical values, leading to tests with correct asymptotic size regardless of identification strength and good power properties. Importantly, this allows one to directly conduct uniform inference on low-dimensional functions of the model parameters, including one-dimensional subvectors. The paper focuses on three examples of models to illustrate the results: sample selection models, models of potential outcomes with endogenous treatment, and threshold-crossing models.
* This paper is motivated by my earlier working paper titled "Identification and Inference in a Bivariate Probit Model With Weak Instruments" (2009). (Slides for the latter paper are available upon request.)
“Multiple Treatments with Strategic Interaction” [Draft coming soon!]
We develop an empirical framework in which we identify and estimate the effects of treatments on a particular outcome when the treatments are the result of strategic interaction. We consider a model where agents play a discrete game with complete information whose equilibrium actions (i.e., binary treatments) determine an outcome of interest in a nonseparable model with endogeneity. Due to the multiplicity of equilibria in the first stage, the model as a whole is incomplete. Without imposing parametric restrictions or large support assumptions, we partially identify the average treatment effects (ATEs). Excluded variables and nonparametric shape restrictions on the outcome function and payoff functions enable us to derive tight bounds. With an additional assumption that the excluded variables have a rectangular support, we derive sharp bounds. Point identification is achieved when the excluded instruments have full support.
This paper analyzes the effects of weak instruments on identification, estimation, and inference in a simple nonparametric model of a triangular system. The paper derives a necessary and sufficient rank condition for identification, based on which weak identification is established. Then nonparametric weak instruments are defined as a sequence of reduced-form functions where the associated rank shrinks to zero. The problem of weak instruments is shown to be similar to an ill-posed inverse problem, which motivates the introduction of a regularization scheme. The paper proposes a penalized series estimation method to alleviate the effects of weak instruments. The rate of convergence of the resulting estimator is given, and it is shown that weak instruments slow down the rate while penalization delivers a faster rate. Consistency and asymptotic normality results are also derived. Monte Carlo results are presented, and an empirical example is given, where the effect of class size on test scores is estimated nonparametrically.

This paper provides identification results for a class of models specified by a triangular system of two equations with binary endogenous variables. The joint distribution of the latent error terms is specified through a parametric copula structure that satisfies a particular dependence ordering, while the marginal distributions are allowed to be arbitrary but known. This class of models is broad and includes bivariate probit models as a special case. The paper demonstrates that having an exclusion restriction is necessary and sufficient for global identification in a model without common exogenous covariates, where the excluded variable is allowed to be binary. Having an exclusion restriction is sufficient in models with common exogenous covariates that are present in both equations. The paper then extends the identification analysis to a model where the marginal distributions of the error terms are unknown.
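The weak-instrument phenomenon described in the first abstract above can be illustrated with a toy simulation. This sketch is a deliberate simplification (a linear triangular system rather than the paper's nonparametric one, with illustrative parameter values): when the first-stage coefficient is close to zero, the IV estimator is pulled toward the inconsistent OLS limit.

```python
import numpy as np

def simulate_iv(pi, n=500, reps=200, rho=0.8, beta=1.0, seed=0):
    """Median IV estimate of beta across simulations for the linear
    triangular system y = beta*x + u, x = pi*z + v, with corr(u, v) = rho.
    pi controls instrument strength; pi near 0 means a weak instrument."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(reps):
        z = rng.uniform(-1, 1, n)
        v = rng.standard_normal(n)
        u = rho * v + np.sqrt(1 - rho**2) * rng.standard_normal(n)
        x = pi * z + v          # first stage: endogenous regressor
        y = beta * x + u        # outcome equation
        # Just-identified IV estimator: sample cov(z, y) / cov(z, x)
        estimates.append(np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1])
    return float(np.median(estimates))

strong = simulate_iv(pi=1.0)    # strong first stage: close to beta = 1
weak = simulate_iv(pi=0.05)     # weak first stage: biased toward OLS
```

The median is used rather than the mean because the just-identified IV estimator has heavy tails when the first stage is nearly flat; even so, the weak-instrument estimates concentrate far from the true value.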
“Cybersecurity Policy Designs and Evaluations: A Field Experiment and Economic Theory” with YunSik Choi, Jin Hyuk Choi, Shu He, Gene Moo Lee, and Andy Whinston (Latest Version: November 5, 2015. Revise and Resubmit, Journal of Cybersecurity)
Cyberinsecurity has been a serious threat to the world. A suboptimal cybersecurity environment is partly due to organizations' underinvestment and the lack of suitable policies. The motivation of this paper stems from related policy questions: how to develop a socially desirable cybersecurity environment, and how to design policies for governments and other organizations that can ensure a sufficient level of cybersecurity. This paper addresses these questions by exploring two mutually related themes. The first theme considers information asymmetry and peer effects; the second theme studies attacker-defender interaction and considers cyberinsurance policies. In relation to these themes, the paper designs and evaluates several cybersecurity policies via both empirical and theoretical analyses. In the first part, as a policy device to alleviate information asymmetry and to achieve transparency in cybersecurity information sharing practices, we introduce a cybersecurity evaluation agency along with regulations on information disclosure. To empirically evaluate the effectiveness of such an institution, we conduct a large-scale randomized field experiment on 7,919 U.S. organizations. Specifically, we generate organizations' security reports based on outbound spam and industry peer rankings, then share the reports with the subjects either privately or publicly. We find evidence that security information sharing combined with a publicity treatment has significant effects on spam reduction for large spammers. Moreover, significant peer effects are observed among industry peers after the experiment. In the second part of the paper, we turn to theoretical analyses and introduce economic models to conduct more comprehensive policy analyses on cybersecurity. The first model is a dynamic model that incorporates strategic interaction between defending organizations and attackers and that reveals a mechanism under which the players' actions affect security outcomes.
The second model is a cyberinsurance-reinsurance framework that suggests the importance of cyberinsurance and the role of governments as ultimate risk takers in promoting cyberinsurance businesses. By computing a simple version of this model, we find that the existence of a cyberinsurance market does encourage organizations to make cybersecurity investments, provided that the organizations underestimate losses incurred by cyberinsecurity. As applications of this model, we consider cyberinsurance for cloud computing and software validation and verification (V&V). Lastly, we propose creating a security reputation measure.
Work in Progress:
“Sensitivity Analysis in Triangular Systems of Equations with Binary Endogenous Variables” with Sungwon Lee
Publications:
"Invalidity of the Bootstrap and the m out of n Bootstrap for Confidence Interval Endpoints Defined by Moment Inequalities," with Donald Andrews, Econometrics Journal (2009), Volume 12, pp. S172–S199. This paper analyzes the finite-sample and asymptotic properties of several bootstrap and m out of n bootstrap methods for constructing confidence interval (CI) endpoints in models defined by moment inequalities. In particular, we consider using these methods directly to construct CI endpoints. By considering two very simple models, the paper shows that neither the bootstrap nor the m out of n bootstrap is valid in finite samples or in a uniform asymptotic sense in general when applied directly to construct CI endpoints. In contrast, other results in the literature show that other ways of applying the bootstrap, m out of n bootstrap, and subsampling do lead to uniformly asymptotically valid confidence sets in moment inequality models. Thus, the uniform asymptotic validity of resampling methods in moment inequality models depends on the way in which the resampling methods are employed.
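The kind of bootstrap failure discussed above can be seen in a minimal boundary-parameter simulation. This toy model, its sample sizes, and the function name are illustrative and are not the paper's actual moment inequality setup: for the parameter theta = max(mu, 0) with true mu = 0, the limit distribution of sqrt(n)*(theta_hat - theta) is max(Z, 0), which places mass exactly 1/2 at zero, whereas the nonparametric bootstrap's mass at zero fluctuates from sample to sample rather than converging.

```python
import numpy as np

def bootstrap_mass_at_zero(n=200, n_boot=300, rng=None):
    """For one sample with true mu = 0, return the bootstrap estimate of
    P(sqrt(n) * (theta_hat - theta) = 0), where theta = max(mu, 0) and
    theta_hat = max(sample mean, 0)."""
    if rng is None:
        rng = np.random.default_rng()
    x = rng.standard_normal(n)            # true mu = 0: the boundary case
    theta_hat = max(x.mean(), 0.0)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x, size=n, replace=True)   # nonparametric bootstrap
        stats[b] = np.sqrt(n) * (max(xb.mean(), 0.0) - theta_hat)
    return float(np.mean(stats == 0.0))

rng = np.random.default_rng(0)
masses = np.array([bootstrap_mass_at_zero(rng=rng) for _ in range(100)])
# The bootstrap masses are bimodal: near 0 when the sample mean is
# positive, and above 1/2 when it is negative. They never settle near
# the correct limiting value of 0.5.
spread = masses.std()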

