We examine the impact of political instability on scientific output. To do so, we analyze the effect of the 2013 U.S. federal government shutdown on federally funded research in Antarctica. Leveraging a novel dataset in a difference-in-differences design that exploits the timing of the decision, we document an 11% decline in the number of publications among affected researchers, as well as altered collaboration patterns, which we corroborate with qualitative survey evidence. Together, the results suggest that even brief episodes of political instability can have enduring deleterious effects on science.
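For readers unfamiliar with the design, here is a minimal sketch of the kind of difference-in-differences regression the abstract refers to, in Python with statsmodels. The dataset, variable names, and specification are all hypothetical illustrations, not the paper's actual code:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical researcher-year panel: `treated` flags federally funded
# Antarctic researchers, `post` flags years after the 2013 shutdown.
df = pd.read_csv("researcher_panel.csv")  # hypothetical file

# Difference-in-differences: the coefficient on treated:post captures
# the post-shutdown change in (log) publications for affected
# researchers relative to the comparison group.
model = smf.ols("log_publications ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["researcher_id"]}
)
print(model.summary())
```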
"Does Politics Permeate Science? Evidence from a Field Experiment on Political Bias in Academic Opportunity" with Jessica Khan (Northwest Florida State College). Working paper available on request.
Abstract removed for now because we're eliciting people's priors on the study!
"The Gender Ask Gap in Science" with Valentina Tartari, H.C. Kongsted, Astrid Ulv Thomsen, and Lorenzo Palladini (Stockholm School of Economics and Copenhagen Business School). Draft coming soon.
"From Ancient Centers to Modern Capitals: The Influence of Historic Civilizational Hubs on the Spatial Distribution of Population and Political Power" with Justin Cook (Tulane) and Raymond Kim (Westmont College).
--> Historic civilization hubs from a thousand years ago still shape where people live and where power resides today.
Work in Progress
RCT on the Impact of AI on Science
"Benchmarking the Future of Work: Mapping AI Progress to Occupational Exposure" solo authored. Submitted to Agents4Science Conference
Artificial intelligence is advancing at a pace once thought unimaginable, yet we still lack clear tools to understand how these breakthroughs map onto the world of work. This paper introduces a novel framework that systematically links AI benchmark progress (the scoreboards that track frontier capabilities) to the occupational tasks that define human labor. Unlike patents, surveys, or deployment data, which are often lagged, opaque, or subjective, benchmarks are transparent, replicable, and updated in near real time. Using O*NET as a bridge, we connect benchmark trajectories across domains, including language, reasoning, vision, and multimodal tasks, to 52 human abilities, and translate these into occupation-level indices of AI exposure. The result is a dynamic, task-level methodology that allows us to track and forecast where automation pressures are likely to emerge. By repositioning benchmarks from technical scoreboards to economic indicators, this study offers a fresh lens for anticipating the future of work and shaping policy responses.
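As a rough illustration of the mapping the abstract describes, here is a minimal sketch in Python. The benchmark scores, ability domains, occupations, and weights below are made-up placeholders; the paper's actual O*NET crosswalk and weighting scheme are more involved:

```python
import pandas as pd

# Hypothetical benchmark progress by ability domain, normalized to [0, 1].
ability_progress = pd.Series(
    {"language": 0.90, "reasoning": 0.75, "vision": 0.80}
)

# Hypothetical O*NET-style importance weights of each ability for each
# occupation (rows sum to 1).
occupation_weights = pd.DataFrame(
    {
        "language": [0.6, 0.2],
        "reasoning": [0.3, 0.3],
        "vision": [0.1, 0.5],
    },
    index=["technical writer", "radiologist"],
)

# Occupation-level AI exposure index: weighted average of benchmark
# progress over the abilities each occupation relies on.
exposure = occupation_weights.mul(ability_progress, axis=1).sum(axis=1)
print(exposure.sort_values(ascending=False))
```

Rerunning this kind of calculation as new benchmark results arrive is what makes the index dynamic rather than a one-off snapshot.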
"Harvard Business School Case 625-048, August 2024 "Managing Science: Perspectives from Postdocs"
--> One of the few HBS cases on scientists!!