Research
Finished Manuscripts
Unconvinced Yet Influenced: Vaccine Decisions in the Shadow of Misinformation (with Mihaela Popa-Wyatt and Ed Pertwee). Forthcoming in Misinformation and Other Epistemic Pathologies, Cambridge University Press. Draft This paper examines how misinformation shapes vaccine refusal even among individuals who remain dubious about the false claims themselves. Exposure to misinformation can alter one's perception of a vaccine's rare side effects and one's assessment of the credibility of authoritative sources. When individuals are uncertain about vaccine safety, these changes can lead them to refuse vaccination. Using an expected utility framework, we show that such refusal, while shaped by misinformation, can still be an instrumentally rational response under uncertainty rather than a cognitive failure. We identify three mechanisms through which misinformation influences decision-making: (1) erosion of trust in authoritative sources, (2) heightened emotional responses to anecdotal risks, and (3) a combination of relatively modest shifts in both trust and side-effect perception. Our model highlights the irreversible impact of misinformation and underscores the need for proactive interventions, such as pre-bunking and media literacy, to counter its effects.
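A toy illustration of the kind of mechanism the abstract describes (not the paper's actual model; all probabilities and utilities below are hypothetical numbers chosen for illustration): a modest misinformation-driven shift in perceived side-effect risk can flip an expected-utility comparison, even without the agent accepting the false claims outright.

```python
# Toy sketch: vaccine refusal as an expected-utility flip.
# All numbers are hypothetical; this is not the paper's model.

def expected_utility_vaccinate(p_side_effect, u_side_effect=-100.0, u_protected=10.0):
    # EU of vaccinating: small chance of a severe side effect,
    # otherwise the benefit of protection.
    return p_side_effect * u_side_effect + (1 - p_side_effect) * u_protected

def expected_utility_refuse(p_disease=0.05, u_disease=-50.0, u_healthy=0.0):
    # EU of refusing: risk of catching the disease unprotected.
    return p_disease * u_disease + (1 - p_disease) * u_healthy

baseline = expected_utility_refuse()
# Before exposure: perceived side-effect risk is tiny, so vaccination wins.
before = expected_utility_vaccinate(p_side_effect=0.001)
# After exposure: a modest upward shift in perceived risk, and refusal wins.
after = expected_utility_vaccinate(p_side_effect=0.15)

print(before > baseline)  # vaccination preferred before exposure
print(after > baseline)   # refusal preferred after exposure
```

The point of the sketch is only that refusal can fall out of a standard expected-utility calculation once perceived risks shift, which is the sense in which it can be instrumentally rational rather than a cognitive failure.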
A paper on the rationality of ignoring stereotype-aligned statistical evidence (title withheld for anonymous review): This paper argues that bounded Bayesians are rationally permitted to ignore many demographic statistical studies that are aligned with social stereotypes. By stereotype-aligned evidence, I mean statistical findings whose correlation with a demographic category confirms a stereotype about that category. The argument rests on two contributions. First, I develop a framework for deciding whether to engage with new information, designed for a bounded, inquisitive Bayesian who is about to make a decision under uncertainty. I argue that such an agent should engage with new information only if they are sufficiently confident that (i) learning it will change their mind about the decision they are about to make, and (ii) this learning will remain undefeated in the course of their inquiry. Second, I offer a taxonomy of how agents at different levels of statistical sophistication update on statistical evidence. I distinguish three levels: fully sophisticated, statistically literate, and statistically naive. Most laypeople fall within the latter two categories. I propose that these laypeople treat statistical evidence as expert testimony about objective chance, and I show that this testimonial model, combined with the engagement framework, yields rational grounds for dismissing stereotype-aligned statistics: agents at all three levels of sophistication are rationally permitted to ignore such statistics in their decision-making. I conclude by translating these results into Buchak's language of faith, arguing that bounded Bayesians have rational grounds for maintaining faith in strangers that would be unavailable to idealized unbounded agents.
A paper on the rationality of suspending belief about politically salient science (title withheld for anonymous review): I argue that a bounded Bayesian who already maintains a high credence in a scientific claim can nonetheless be rationally required to suspend categorical belief about it when the claim is politically salient. I develop a framework for managing awareness growth grounded in expected epistemic utility: a bounded agent should expand their space of possibilities with respect to a proposition if and only if the expected accuracy gain from potential learning justifies the cognitive cost of expansion. I show that a socially aware agent is rationally compelled to expand their awareness differently for politically salient and non-political scientific claims. Using a partition-dependent probabilistic account of belief, I then show that this difference in awareness is sufficient to justify believing non-political claims while suspending judgment about politically salient ones, even when credence in both is equally high.
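The expansion rule stated in the abstract can be written down in a few lines. This is a minimal sketch under our own illustrative assumptions (the gain and cost figures are invented for the example, not drawn from the paper's formal account):

```python
# Minimal sketch of an awareness-expansion rule based on expected
# epistemic utility. Numbers are illustrative, not from the paper.

def should_expand(expected_accuracy_gain, cognitive_cost):
    # Expand the space of possibilities for a proposition iff the
    # expected accuracy gain from potential learning justifies the
    # cognitive cost of expansion.
    return expected_accuracy_gain >= cognitive_cost

# Politically salient claim: a socially aware agent anticipates more
# relevant possibilities to learn about, so the expected gain is higher.
print(should_expand(expected_accuracy_gain=0.3, cognitive_cost=0.1))   # True
# Non-political claim: little expected gain from expanding.
print(should_expand(expected_accuracy_gain=0.02, cognitive_cost=0.1))  # False
```

On this sketch, equally high credences can coexist with different awareness states, which is the asymmetry the paper uses to justify believing one claim while suspending judgment about the other.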
A paper on the continued influence effect of misinformation (title withheld for anonymous review, with Ed Pertwee and Mihaela Popa-Wyatt): The continued influence effect (CIE) is a well-documented phenomenon whereby misinformation continues to affect people's beliefs even after they receive and accept a correction. Standard explanations attribute CIE to cognitive limitations or irrational tendencies. This paper demonstrates that CIE can occur even for ideally rational Bayesian agents, suggesting that the phenomenon is not merely a product of flawed reasoning but can arise from the structure of our informational environment. We model both misinformation and corrective information as testimony from sources that are not fully trusted. To formalize partial trust, we develop an account of defeater propositions within Bayesian epistemology. We distinguish two types of defeaters: inaccuracy (the source is careless or unreliable) and indeterminacy (the source is manipulative or deceptive). While uncertainty about inaccuracy defeaters permits standard conditionalization, uncertainty about indeterminacy defeaters does not: the rigidity condition fails, requiring an alternative updating method that we introduce and develop. Corrective information takes two forms: undercutting corrections target the source of misinformation (claiming it is inaccurate or insincere), while rebutting corrections directly contradict its content. We show that an ideal Bayesian can experience CIE after receiving either type of correction. Crucially, this occurs even when the agent trusts the corrective source more than the source of misinformation: because neither source is trusted completely, both leave their mark on the agent's credences. We also show that timing matters for undercutting corrections about source manipulation: receiving such corrections before exposure to misinformation can prevent CIE in belief entirely, while corrections received afterward cannot.
Choosing Moral Arguments in Social Movements: A Rational Choice Model. How should an activist frame the message of a social movement to maximize participation? Standard collective action models treat participation as a coordination problem among rational individuals who must overcome free-rider temptations. We argue that this framing overlooks a prior question: whether an individual perceives participation in a social movement as a collective action problem depends on their mode of moral reasoning. Those who reason deontologically may view participation as a categorical duty, independent of what others do. Those who reason as consequentialists may view participation as a contribution to a public good, making them vulnerable to free-rider logic. We develop a rational choice model in which activists strategically choose between deontological and consequentialist framing under uncertainty about audience composition. The model generates two hypotheses: (1) deontological framing is more effective in early-stage movements, while consequentialist framing is more effective in later-stage movements; (2) among consequentialist appeals, non-welfarist framing is more effective than welfarist framing at generating the expectation structures necessary to sustain participation. These hypotheses can be tested by future empirical research on movement communication strategies.
In Preparation
Mind Your Probability Language. Probability admits multiple interpretations, one of which is the propensity interpretation. Given this interpretive possibility and the opaque causal structure of the social world, probabilistic statements about social groups that align with social stereotypes can implicate the propensity interpretation. Since the propensity interpretation of conditional probabilities suggests a partially stable causal relation, reporting a stereotype-aligned probabilistic correlation implicates partially stable causation within a social group. This implicature is dangerous: in some policy-making conversations, uttering probabilistic correlations can conversationally suggest interventions that are aligned with oppressive social practices. Such implicatures render statistical generalizations about social groups vulnerable to exploitation and misinterpretation, potentially perpetuating social injustice. This paper scrutinizes the pragmatics of probabilistic statements in relation to oppressive social practices and outlines strategies to minimize the chance of exploitation and misunderstanding.
Longtermism and Revolution. Longtermists claim that preventing human extinction should take priority over preventing near-term harms, and they explain this by appealing to the vast number of future people affected. This paper challenges that explanation by designing a thought experiment, structured as an interventionist causal test, that contrasts the extinction case with the choice between revolution and reform under a corrupt government. I show that when the catastrophic nature of the outcome is removed while scale and ambiguity are held fixed, the clarity of the longtermist preference weakens. This establishes that the catastrophic nature of extinction, not merely its scale, is a causal contributor to the longtermist intuition, with significant implications for the scope of longtermist recommendations.