Below you will find a few sorts of research. At the top I list recent work advancing through the publication process, focused on social media and methodology. Below that you will find my doctoral work, which focused on international relations theory, mechanistic explanations for risk preference, and the philosophy of science, along with a list of published articles. Further down you will see manuscripts gathering dust.
With Emily Thorson, Taylor Brown, Arjun Wilkins, Adriana Crespo-Tenorio, Winter Mason, Talia Stroud, Joshua Tucker, et al.
In progress, presented at APSA 2024. Part of the broader 2020 Election Project.
With Tiago Ventura, Christopher Barrie, Margaret E. Roberts, and Joshua A. Tucker (Under Review)
What happens to information environments when democracies ban social media platforms? While a large literature examines information control under authoritarianism, democratic governments have increasingly intervened in major online platforms. We study a prominent case: Brazil's 2024 national ban on the social media platform X. Using an event-study design, we estimate the causal effects of the ban and examine how partisan identity shaped responses. Drawing on a large sample of politically engaged users and ideal-point estimates of ideology, we find strong partisan asymmetries. Conservative users not aligned with the government were significantly more likely to circumvent the ban, and right-leaning news domains became markedly more prevalent on the platform. We describe this dynamic as a "sorting ratchet": the ban segmented the digital public sphere along partisan lines, with effects that persisted even after restrictions were lifted. Platform bans in democratic settings may therefore deepen polarization and durably reshape information environments.
With Sally Sharif (Under Review)
Arguing against prior theories that democratization has no impact on income inequality, Dorsch and Maarek (2019) contend in an APSR article that democratization causes extreme income distributions to move towards a "middle ground," reducing inequality in highly unequal autocracies while increasing it in relatively egalitarian ones. Central to the study's evidence is an instrumental variable strategy that leverages the regional share of democracies, and its interaction with initial inequality levels, to identify both the effect of democratization and the democracy–inequality interaction. We provide a critical replication of this study, making two central contributions. First, we show that the generated instrument violates the exclusion restriction; second, we show that even when properly constructed, the same linear function cannot be used to identify two causal effects. We then demonstrate how the same source of exogenous variation can be used to identify multiple causal effects using a generalized additive model, with non-linearities in the first stage serving as additional instruments. Across all specifications, the data do not support the middle-ground theory proposed by the authors: neither democracy nor its interaction with initial inequality has a statistically significant effect on the Gini coefficient. Our findings are consistent with an extensive literature in economics and political science that has struggled to uncover a systematic democracy–inequality link. The replication method we employ offers a practical tool for other studies in contexts where valid instruments are scarce or the exclusion restriction is difficult to satisfy.
Committee: Bruce Bueno de Mesquita (chair), Alastair Smith, Tiberiu Dragu, Steven Brams, Alexander Skiles
International Relations, as a field of study, must overcome at least two crises if it is to attain the status of a science. The first crisis is theoretical: the primary theories and models we use, when strictly assessed, can neither predict, explain, nor be tested, owing to their instrumentalist nature. The second crisis is evidentiary: our theories remain weak and our methods feeble to the extent that instrumentalist research is subject to Meehl's Paradox. The paradox is that as the statistical power of our studies has increased, we have become increasingly unable to distinguish our theories from pure gibberish. We are left in a position where we know that a large number of empirical patterns exist but do not know why they jointly obtain. In this sense the existing literature is not only isolated, fragmented, and contradictory but is seemingly caught in a cul-de-sac, unable to remove itself from this unruly state.
I try to do just that for the study of war. In the first chapter I describe these crises as well as their solution: the development of theories which exhibit the properties of logical coherence, weak realism, empirical content, and mass accommodation. In the second chapter I consider why International Relations has not yet developed such a theory, and develop the dialectical materialist understanding of theoretical development as a historical process with material preconditions. In the third chapter I describe the historical development of the theoretical motors of the remainder of the work: materialism, elite theory, and mechanistic explanations for risk preference. In the remaining three chapters I develop a number of closely related formal models to explain 28 previously known yet disconnected empirical findings within the theoretical framework. In this regard, the various phenomena discovered over the past 30 years may be understood not as distant and disconnected islands but as the sort of emergent empirical basis from which a strong theoretical superstructure may grow.
With Sally Sharif and Christian Oswald
Forthcoming at International Studies Quarterly
When researchers cannot randomly assign a treatment, as is often the case in international relations, they rely on observational data and use quasi-experimental designs with instrumental variables. Despite new advances in this area, even instruments with high first‑stage relevance (large F‑statistics) may fail the exclusion restriction if they affect the outcome through uncontrolled channels. In applied studies, the widespread use of the same instruments to explain various outcomes "collectively" invalidates them. For instance, in studying economic growth, population size has been used as an instrument for total trade, trade openness, export diversity, foreign aid, and foreign direct investment. The existence of multiple pathways through which population affects economic growth invalidates the exclusion restriction. This paper proposes an identification strategy with which non-linearities in the first stage can be exploited to (1) render otherwise invalid instruments valid, (2) increase the strength of first stage relationships, and (3) identify more than one treatment effect with the same source of exogenous variation. The approach is illustrated through simulations and applications to economic growth and democratization. We provide online resources with R code to facilitate its use in applied studies.
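As a toy illustration of the core idea, and not the paper's implementation, the sketch below simulates a case where an instrument Z fails the exclusion restriction because it also enters the outcome directly. Controlling for Z linearly while instrumenting with its square, which in this simulation affects the outcome only through the treatment, recovers the true effect. All variable names and parameter values here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Simulated data (all values hypothetical). Z enters the treatment D
# non-linearly, but also affects the outcome Y directly, so Z alone
# violates the exclusion restriction.
z = rng.normal(size=n)
u = rng.normal(size=n)                                       # unobserved confounder
d = 1.0 + 0.8 * z + 0.5 * z**2 + u + rng.normal(size=n)
y = 2.0 + 1.5 * d + 1.0 * z - 2.0 * u + rng.normal(size=n)   # true effect: 1.5

def two_sls(y, d, instruments, controls):
    """Manual two-stage least squares with an intercept."""
    ones = [np.ones(len(y))]
    # First stage: project the treatment onto instruments plus controls.
    Z = np.column_stack(ones + instruments + controls)
    d_hat = Z @ np.linalg.lstsq(Z, d, rcond=None)[0]
    # Second stage: regress the outcome on the fitted treatment and controls.
    X = np.column_stack(ones + [d_hat] + controls)
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

naive = two_sls(y, d, instruments=[z], controls=[])        # badly biased upward
fixed = two_sls(y, d, instruments=[z**2], controls=[z])    # exploits the non-linearity
```

In this simulation the direct Z-to-Y channel pushes the naive estimate well above 1.5, while controlling for Z and instrumenting with its square isolates the exogenous non-linear variation and recovers an estimate close to the truth.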
With Bruce Bueno de Mesquita
Published at Oxford Research Encyclopedia of International Studies
Manuscript; Published Version
Despite a long legacy within the study of international politics, risk preference remains an understudied source of behavioral variation. This is most apparent within the study of violent conflict, which, being inherently risky, might be naturally explained by variation in the preferences of the actors involved. Rather than taking this seemingly obvious route, much of the formal theoretic literature continues to assume that the actors under consideration are either risk neutral or risk averse. This blind spot is troubling since the effects of variables on outcomes generally reverse when risk preferences move from averse to seeking, a generally unrecognized scope condition for many theoretical results. There are three central reasons why risk preferences have been neglected within the recent literature despite their theoretical and empirical importance. First, there is a sociological pathology within the field where the seeming obviousness of risk preference as an explanation for war has led to its lack of attention. Second, many formal applications of risk preference become quickly intractable, indicating a deficiency in the formal architecture available to modelers. Third, until recently, risk preferences have generally been assumed rather than explained, with this theoretical underdevelopment leading to intellectual discomfort in the use of the concept. Under the shadow of these problems, the study of risk preference as an explanation for war has gone through three intellectual periods. Starting in the late 1970s, the concept of risk preference was introduced to the field and applied widely to the phenomenon of war. This cumulative development abruptly ended in the early 1990s with the wide adoption of prospect theory and the undue dismissal of risk preference as a nonrationalist explanation for war.
Under these conditions, the field bifurcated into two more or less isolated groups of scholars: political psychologists using nonformal versions of prospect theory and heuristic definitions of risk preference, on the one side, and rationalistic formal modelers universally assuming risk-neutral or risk-averse preferences, on the other. By the early 2000s, the wave of informal applications of prospect theory began to subside, carrying with it the use of risk preference as an explanation for war. By 2010, the concept had all but disappeared from the literature. Following this decade of silence, the concept of risk preference was reintroduced to the field in the early 2020s and has been demonstrated to explain some of the major empirical findings from 1990 to 2020. This reintroduction holds the potential for providing unified theoretical foundations for increasingly wide swaths of the conflict literature and may provide a rich basis for the derivation of novel empirical implications.
Published at Journal of Theoretical Politics
Manuscript; Published Version
Over the past 30 years empirical international relations has discovered a number of conflict patterns which are variously considered to be competing, contradictory, or emanating from unique processes. I present a simplified and corrected selectorate model of war which unifies four such lines of research: the autocratic, democratic, and capitalist peaces, together with diversionary war. It is shown that domestic political competition, as understood within the selectorate approach, contains microfoundations for context conditional risk preference as a rationalist explanation for war. This novel mechanism, in turn, coherently explains the main findings from these various areas of enquiry. And so the discoveries of these four lines of enquiry can be understood not as apparently accidental or competing patterns but as aspects of the same mechanism operating under different empirical contexts.
Published at International Studies Quarterly
Manuscript; Published Version
Over the past decade a growing literature has re-examined the relationship between material scarcity and conflict. Despite increasing policy salience and empirical interest, coherent theoretical accounts remain underdeveloped. This article develops microfoundations for a first-image rationalist explanation for war. It is shown that the basic physiological fact of necessary consumption induces context conditional risk preference, a feature which coherently explains empirical patterns of conflict. When applied to higher levels of analysis, the basic mechanism explains additional patterns such as the association between power preponderance and conflict and the oversized demands made by weak actors.
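The mechanism can be illustrated with a toy numerical example of my own, not the paper's model: when payoffs count only above a subsistence floor, an actor below the floor prefers a fair gamble that might clear it, while an actor comfortably above the floor prefers the sure thing. The utility function and all numbers below are hypothetical.

```python
import math

SUBSISTENCE = 1.0  # hypothetical consumption floor

def u(c):
    """Toy utility: consumption yields payoff only at or above subsistence."""
    return math.log(1.0 + c) if c >= SUBSISTENCE else 0.0

def expected_u(win, lose, p=0.5):
    """Expected utility of a simple binary gamble."""
    return p * u(win) + (1.0 - p) * u(lose)

# Below the floor, a fair gamble (mean 0.8) beats the certain 0.8 ...
poor_prefers_gamble = expected_u(1.6, 0.0) > u(0.8)
# ... while above the floor, the certain 2.0 beats a fair gamble (mean 2.0).
rich_prefers_certainty = expected_u(3.2, 0.8) < u(2.0)
```

The same actor's risk preference thus reverses with context, risk seeking near subsistence and risk averse with slack, which is the kind of context conditional preference the abstract describes.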
Professor Johnson argues, against the `standard rationale,' that common notions of `formal models as empirical hypothesis generators' are inadequate descriptions of their use, in practice, by political scientists. Rather than being used to generate predictions which then serve as empirical hypotheses to be tested against evidence from the `real world,' formal theoretic models are frequently used instead for purposes of theoretical clarification and conceptual exploration. I undermine three central pillars of Johnson's argument and then offer a critique of certain pseudorationalist tendencies within PPT/EITM. A third view of the relationship between theoretical and empirical models is sketched, emphasizing the dialectical relationship between our theories, the external world, and our models of each. Rather than viewing conceptual clarification as a distinct cognitive value, empirical performance and the clarification of concepts are seen as mutually constitutive aspects of scientific practice.
How are economic and policy outcomes guided by political institutions? In general, there is a consensus that democratic institutions are better at inducing a more equitable distribution of goods and services by government than non-democratic institutions. How this distribution varies within democracies, however, remains less certain. Such questions are of great normative importance and constitute a core question in the study of political economy and comparative politics. I contribute to this line of inquiry by bringing the concepts of selectorate theory to bear on the question. Because of the transparent rules by which votes translate into the power to decide policy, these latent concepts can be measured. Empirically, I show that the implications of selectorate theory are useful in explaining policy outcomes in advanced democracies. Within a finite mixture model framework, I then show that an impartial Bayesian forced to decide between the selectorate model and the sum of alternatives would prefer the selectorate model roughly 91% of the time. Finally, I show that an impartial Bayesian interested only in prediction prefers the selectorate variables over the alternatives.
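The model-comparison arithmetic behind a statement like this is simple to sketch. With hypothetical log marginal likelihoods, which are illustrative numbers and not the manuscript's estimates, and an impartial 50/50 prior over the two models, Bayes' rule yields a posterior probability for the first model:

```python
import numpy as np

# Hypothetical log marginal likelihoods for (selectorate, alternatives);
# illustrative numbers only, not values from the manuscript.
log_ml = np.array([-1041.2, -1043.5])
log_prior = np.log([0.5, 0.5])     # impartial prior over the two models

log_post = log_ml + log_prior
log_post -= log_post.max()         # subtract the max for numerical stability
post = np.exp(log_post) / np.exp(log_post).sum()

p_selectorate = post[0]            # posterior probability of the first model
```

With these made-up numbers the posterior probability of the first model comes out near 0.91, showing how a modest gap in log marginal likelihood translates into a strong posterior preference.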
Manuscript. Old Slides. R Package. PolMeth 2020.
The threat of endogeneity is ubiquitous within applied empirical research. A `near Bayesian' method of sensitivity analysis is developed and implemented, overcoming a number of difficulties with existing approaches. The procedure targets the distribution of possible causal effects (DOPE) and summarizes the sensitivity of estimates to regressor-error dependencies. The procedure samples from the set of valid correlation matrices to generate the a priori distribution of causal effects under ignorance of the control function which would achieve conditional independence. This allows scholars to make probabilistic statements regarding the sensitivity of their results to arbitrary combinations of omitted variables, systematic measurement errors, selection biases, reciprocal causation, and certain SUTVA violations. Unlike existing approaches, the methodology lends itself to comparing the `robustness' of studies and is easily extended. The approach is illustrated through a number of examples.
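A drastically simplified one-regressor analogue of the idea can be sketched as follows; the actual procedure samples full valid correlation matrices rather than a single correlation from a uniform interval. Under ignorance of the regressor-error correlation rho, each draw of rho implies a causal effect through the omitted-variable-bias formula beta = beta_ols - rho * s_e / s_x, and the distribution of those implied effects summarizes sensitivity. All data and ranges below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data with a single regressor (illustrative values only).
n = 1_000
x = rng.normal(size=n)
y = 1.0 * x + rng.normal(size=n)

beta_ols = (x @ y) / (x @ x)       # OLS slope without intercept
s_x = x.std()
s_e = (y - beta_ols * x).std()     # residual scale (a rough stand-in)

# Under ignorance of the regressor-error correlation rho, sample it and
# map each draw to the causal effect it would imply:
#     beta_implied = beta_ols - rho * s_e / s_x
rho = rng.uniform(-0.9, 0.9, size=10_000)
implied = beta_ols - rho * s_e / s_x

lo, hi = np.quantile(implied, [0.05, 0.95])  # 90% interval of possible effects
```

The resulting interval brackets the OLS estimate and widens with the assumed range of regressor-error dependence; in the full procedure the draws come from the set of valid correlation matrices, which is what licenses probabilistic statements over arbitrary combinations of biases.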