I'm a PhD candidate in Economics at Columbia University.
I work on behavioral and experimental economics, and microeconomic theory.
I am on the job market in the 2025-26 academic year.
Here is my CV.
You can contact me at ss5580@columbia.edu.
Abstract: Economic choices are often stochastic: the same person may make different choices when facing the same alternatives repeatedly. Standard models assume that the degree of randomness reflects the size of utility differences, but choice inconsistencies could also reflect difficulty in comparing alternatives. Recent studies estimate such comparison difficulty (or "complexity") by fitting functional forms to aggregate choice data under a representative-agent assumption. However, aggregate data can violate standard models of random choice simply because of heterogeneity in preferences, even in the absence of variation in comparison difficulty. This paper develops a revealed preference framework, collective rationalizability, that tests for variation in comparison difficulty in aggregate data while explicitly accounting for heterogeneity. The framework characterizes whether violations of standard models can be explained by comparison difficulty alone, by heterogeneity alone, or only by both together. I provide a statistical test with finite-sample inference and apply the method to two existing experiments. In both cases, heterogeneity alone explains the observed failures of stochastic transitivity well, demonstrating that comparison difficulty can be confounded with heterogeneity in aggregate data not only in theory but also in practice.
Abstract: How do individuals with possibly limited cognitive capacity approach games with large, high-dimensional strategy spaces? We define an algorithm for constructing representative subsets (or "grids") of strategies, each spanning the strategy space approximately uniformly, and we propose to model individuals as if each restricted their strategy set to a given (randomly chosen) grid. We apply the method to a Blotto-type resource allocation game, which we also bring to the lab. We find a strong mismatch between the experimental data and the (unique) Nash equilibrium. By contrast, predictions over sufficiently coarse grids approach the behavioral regularities and dispersion present in the data.
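To give a flavor of the idea, here is a minimal sketch of one way a uniform coarse grid might be built for a Blotto-type game. It assumes (hypothetically; the paper's actual construction may differ) that a budget is split across a small number of battlefields in multiples of a chosen step size, so coarser steps yield smaller, more tractable strategy sets:

```python
from itertools import product

def coarse_grid(budget, n_fields, step):
    """Enumerate all allocations of `budget` across `n_fields`
    battlefields in nonnegative multiples of `step` -- a uniform
    coarse grid over the simplex of feasible Blotto strategies.
    (Illustrative only; names and construction are assumptions.)"""
    units = budget // step
    grid = []
    # Choose unit counts for the first n_fields - 1 battlefields;
    # the last battlefield absorbs whatever budget remains.
    for combo in product(range(units + 1), repeat=n_fields - 1):
        if sum(combo) <= units:
            grid.append(tuple(c * step for c in combo)
                        + ((units - sum(combo)) * step,))
    return grid

# A budget of 120 over 3 battlefields in steps of 40 gives a
# 10-strategy grid, versus thousands of allocations at step 1.
g = coarse_grid(120, 3, 40)
```

Shrinking `step` makes the grid denser and recovers the full strategy space in the limit, which is the sense in which grid coarseness parameterizes cognitive restriction here.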
Abstract: A principal decides whether to approve an agent based on a noisy signal (e.g., test scores) generated by the agent. High-quality agents can produce high signals on average at lower cost, but the realizations are subject to noise that depends on the screening technology's precision. We uncover a paradoxical "pitfall of precision": when precision is already high, further improvements reduce screening accuracy and lower the principal's welfare. This occurs because greater precision incentivizes more low-quality agents to engage in strategic signaling, and this distortion outweighs the direct benefit of improved precision. We also examine how commitment power helps mitigate this pitfall.