Publications
A review of Knowledge: A Human Interest Story, by Brian Weatherson (forthcoming in Mind)
How Do Desires Explain Motivated Reasoning? (forthcoming in Synthese)
A motivated reasoner reasons as she does because of what she wants to believe. But what exactly does the "because" here communicate? That is, how do the motivated reasoner's desires explain why she reasons as she does? In this essay, I argue that the explanation in question is a manifesting explanation. Just as a glass breaking manifests fragility, motivated reasoning manifests a desire to (not) believe. One significant advantage of this picture is that it helps us understand why motivated reasoning is problematic. Coupled with a few minimal, independently motivated ideas, it reveals that motivated reasoning involves, roughly, a lack of concern for believing in line with the available evidence: a distinctively epistemic defect in the reasoner's will, analogous to the moral defect in the will of someone who disregards morality's demands. As a bonus, I'll suggest, this picture points towards a promising way of thinking about epistemic blameworthiness.
"Epistemic Rationality and the Value of Truth," Philosophical Review (2024)
Veritism is the idea that what makes a belief epistemically rational is that it is a fitting response to the value of truth. This idea promises to serve as the foundation for an elegant and systematic treatment of epistemic rationality, one that illuminates the importance of distinctively epistemic normative standards without sacrificing extensional adequacy. But I do not think that veritism can fulfill this promise. In what follows, I explain why not, in part by showing that three radically different developments of veritism---one consequentialist, one deontological, and one virtue-theoretic---face eerily similar problems. I also attempt a general explanation of why any version of veritism is doomed to fail. If my arguments are successful, their upshot is that we must look beyond the value of truth if we want to understand the nature and significance of epistemic rationality.
"Arbitrary Switching and Concern for Truth," Synthese (2023)
This essay is about a special kind of transformative choice that plays a key role in debates about permissivism, the view that some bodies of evidence permit more than one rational response. A prominent objection to this view contends that its defender cannot vindicate our aversion to arbitrarily switching between belief states in the absence of any new evidence. A prominent response to that objection tries to provide the desired vindication by appealing to the idea that arbitrary switching would involve a special kind of transformative choice: the choice to change one's epistemic standards, i.e., one's commitments regarding the relative importance of achieving true belief and avoiding false belief. My first aims here are to argue that this response is unsuccessful and propose an alternative. My secondary aim is to consider how this discussion might bear on more general debates about transformative choice.
"The Interests behind Directed Doxastic Wrongs," Analysis (2023)
Very often, when a piece of doxastic activity seems morally wrong---think of racist beliefs, unfair dismissals of testimony, and unfounded suspicions---it also seems to wrong someone in particular. This suggests that we have something at stake in how others think about us. But what, exactly? According to a view that is commonly assumed in the literature on doxastic wronging, your doxastic obligations towards others stem in part from their interests in your having (or not having) particular beliefs. I argue that this view is wrong, and I show how this result helps us understand what our directed doxastic duties are and what it takes to fulfill them.
"Doxastic Wronging and Evidentialism," Australasian Journal of Philosophy (2023)
It is a piece of common sense that we can be mean-spirited, cruel, and unfair in the ways that we form beliefs. More generally, we can wrong others through our doxastic activity. This fact shows that, contrary to an increasingly widespread view in the ethics of belief literature, morality has a role to play in guiding doxastic deliberation and evidence is therefore not the only “right kind of reason” for belief. But the mere existence of doxastic wronging does not tell us anything about how exactly morality enters into doxastic deliberation. These two lessons are crucial for getting debates in the ethics of belief back on the right track.
"Epistemic Coercion," Ethics (2021)
Selected for a PEA Soup discussion forum.
In cases of so-called self-gaslighting, a person starts out with a certain belief---for instance, the belief that she has been sexually harassed---but then begins to worry that other people will be skeptical of it. Prompted in some way by this worry, she scrutinizes her original belief and ultimately gives it up. This kind of self-doubt is sometimes presented as one of the characteristic harms that women face under sexism. But it is difficult to say what exactly the harm could consist in, especially since the reasoning that we describe as self-gaslighting is in some cases perfectly rational. I suggest that self-gaslighting is morally problematic because it involves a coerced change in the way that the subject structures her inquiry.
In Progress
Metaepistemology and the Value Problem (under review, draft available upon request)
What makes knowledge more valuable than mere true belief? This is the so-called value problem. Some epistemologists think that providing a solution to this problem is a key desideratum for theories of knowledge. One simple argument for this position starts with the claim that knowledge is intuitively more valuable than true belief. A more subtle and ambitious argument is rooted in the idea that theorists about knowledge must explain why knowledge is distinctively valuable, on pain of embarrassment; for without such an explanation, the thinking seems to go, epistemologists must confront the awkward possibility that they are spending time and energy on an unworthy subject. In what follows, I flesh out each of the arguments sketched above in a number of ways, and I argue that none of these versions succeeds. The moral is that the value problem does not have the metaepistemological significance that is sometimes attributed to it.
Transparency and Rationality (under review, draft available upon request)
Transparency claims that you can only treat a consideration as a reason for or against believing p if you see it as bearing on the question whether p. I object to Transparency on the grounds that it conflicts with two plausible principles of structural rationality: Enkratic Guidance, which says that if you believe that you lack sufficient evidence to believe p, you are rationally required to treat that fact (as you see it) as a reason against believing p; and No Faultless Dilemmas, which says that there are no rational dilemmas without prior rational mistake. Whether or not this objection succeeds, my discussion carries an important lesson for both fans and critics of Transparency: To see all of Transparency's commitments, we need to think about it in connection with candidate non-evidential reasons for belief that are also non-practical, and we need to think about its implications for structural rationality.
Why Settle? (for a volume edited by Jonathan Ichikawa)
Ameliorative Inquiry: What Is It? What Do We Want It to Be?