research

This page concerns my research publications.  For handouts from some of my research presentations, see here. 

In Progress and Forthcoming

"The Logical Firmament" (e-mail me if you'd like a current draft)

Draws attention to a group of facts that are largely unrecognized by philosophers, yet are precisely what many agents miss in cases of logical non-omniscience.  Proposes a new account of how we come to have logical and mathematical knowledge, which addresses Benacerraf-style concerns.

"What are Epistemic Standards?" (Co-authored with Laura Callahan.  Forthcoming in Attitude in Philosophy, S. Goldberg and M. Walker eds.)

Epistemic standards have become an increasingly central concept in contemporary epistemology.  But what are they?  What philosophical roles are they meant to play?  And what has to be true of an agent for a particular set of epistemic standards to count as hers?

"Normative Modeling" (Forthcoming in Methods in Analytic Philosophy: A Contemporary Reader, J. Horvath, S. Koch, and M. Titelbaum eds.)

Asks whether a modeling methodology like the one used in the sciences could help us discover and understand normative truths.

"Disagreement and Permissiveness" (Forthcoming in the Routledge Handbook of Philosophy of Disagreement, M. Baghramian, J.A. Carter, and R. Rowland eds.)

Considers how the Uniqueness Thesis interacts with the proper response to disagreement, dividing disagreements into three types and examining the role epistemic standards play in each.

Books

Fundamentals of Bayesian Epistemology  (Oxford University Press (2022).  Volume 1 available here, Volume 2 available here.)

Book description: "Bayesian ideas have recently been applied across such diverse fields as philosophy, statistics, economics, psychology, artificial intelligence, and legal theory. Fundamentals of Bayesian Epistemology examines epistemologists' use of Bayesian probability mathematics to represent degrees of belief. Michael G. Titelbaum provides an accessible introduction to the key concepts and principles of the Bayesian formalism, enabling the reader both to follow epistemological debates and to see broader implications

"Volume 1 begins by motivating the use of degrees of belief in epistemology. It then introduces, explains, and applies the five core Bayesian normative rules: Kolmogorov's three probability axioms, the Ratio Formula for conditional degrees of belief, and Conditionalization for updating attitudes over time. Finally, it discusses further normative rules (such as the Principal Principle, or indifference principles) that have been proposed to supplement or replace the core five.

"Volume 2 gives arguments for the five core rules introduced in Volume 1, then considers challenges to Bayesian epistemology. It begins by detailing Bayesianism's successful applications to confirmation and decision theory. Then it describes three types of arguments for Bayesian rules, based on representation theorems, Dutch Books, and accuracy measures. Finally, it takes on objections to the Bayesian approach and alternative formalisms, including the statistical approaches of frequentism and likelihoodism."

Quitting Certainties: A Bayesian Framework Modeling Degrees of Belief  (Oxford University Press (2013).  Available here in paperback.  Honorable Mention for the 2015 APA Book Prize, winner of the 2014 Arlt Award from the Council of Graduate Schools.  Reviewed in: Philosophical Review, NDPR, BJPS, AJP, Economics & Philosophy.)

Book description: "Michael G. Titelbaum presents a new Bayesian framework for modeling rational degrees of belief, called the Certainty-Loss Framework. Subjective Bayesianism is epistemologists' standard theory of how individuals should change their degrees of belief over time. But despite the theory's power, it is widely recognized to fail for situations agents face every day—cases in which agents forget information, or in which they assign degrees of belief to self-locating claims. Quitting Certainties argues that these failures stem from a common source: the inability of Conditionalization (Bayesianism's traditional updating rule) to model claims' going from certainty at an earlier time to less-than-certainty later on. It then presents a new Bayesian updating framework that accurately represents rational requirements on agents who undergo certainty loss.

"Titelbaum develops this new framework from the ground up, assuming little technical background on the part of his reader. He interprets Bayesian theories as formal models of rational requirements, leading him to discuss both the elements that go into a formal model and the general principles that link formal systems to norms. By reinterpreting Bayesian methodology and altering the theory's updating rules, Titelbaum is able to respond to a host of challenges to Bayesianism both old and new. These responses lead in turn to deeper questions about commitment, consistency, and the nature of information.

"Quitting Certainties presents the first systematic, comprehensive Bayesian framework unifying the treatment of memory loss and context-sensitivity. It develops this framework, motivates it, compares it to alternatives, then applies it to cases in epistemology, decision theory, the theory of identity, and the philosophy of quantum mechanics."

Articles and Reviews

"Self-Locating Beliefs" (Co-authored with Andy Egan.  In The Stanford Encyclopedia of Philosophy, E.N. Zalta and U. Nodelman eds.)

Encyclopedia article on philosophical issues concerning an agent's beliefs about who she is, where she is, or what time it is.  Addresses philosophy of mind, philosophy of language, and formal epistemology.

Review of Hannes Leitgeb's The Stability of Belief: How Rational Belief Coheres with Probability (Mind 130 (2021): 1006–16.)

If you'd like to see a longer version of this review, which contains many more details but ultimately had to be cut by a third for publication, click here.

"The Principal Principle Does Not Imply the Principle of Indifference, Because Conditioning on Biconditionals is Counterintuitive" (Co-authored with Casey Hart.  The British Journal for the Philosophy of Science 71 (2020): 621–32.)

Shows that when Hawthorne, Landes, Wallmann, and Williamson (2017) argue that the Principal Principle implies a principle of indifference, they make the same mistaken assumption about conditioning on biconditionals that Roger White made in his objection to "dilation".  (See "Intuitive Dilation?" below.)

"Precise Credences" (In The Open Handbook of Formal Epistemology, R. Pettigrew and J. Weisberg eds.  The PhilPapers Foundation (2019).)

General introduction to the Bayesian practice of representing agents as assigning real-valued degrees of belief to claims.

"Reason without Reasons For" (Oxford Studies in Metaethics 13 (2019): 189–215.)

When a set of facts provides all-things-considered reason to draw some conclusion, is that always because at least one of the facts is a reason for that conclusion?  I provide examples in which a set supports some conclusion without any fact in the set's being a reason for that conclusion.  I then assess the significance of such examples for philosophical methodology, the "reasons-first" program, and metanormative realism.

(Please note that the pdf linked here is a corrected proof, which fixes some errors that appeared in the original print version.)

"Return to Reason" (In Higher-Order Evidence: New Essays.  M. Skipper Rasmussen and A. Steglich-Petersen eds.  Oxford University Press (2019).) 

Responds to a number of challenges to the argument of my "Rationality's Fixed Point".  Among other things, I: explain how I understand rationality; explain why I take akrasia to be irrational; provide an intuitive overview of the argument from the Akratic Principle to the Fixed Point Thesis; explain why you can't necessarily get around this argument by distinguishing the rational from the reasonable, ideal rationality from everyday rationality, or substantive from structural norms; respond to the suggestion that misleading higher-order evidence creates rational dilemmas; explain why the Fixed Point Thesis doesn't rely on an objectivist or externalist notion of rationality; dismiss complaints about agents who aren't able to "figure out" what's rational; and reconstruct and then respond to an objection that peer disagreement undermines doxastic justification.  Finally, I modify my steadfast position on peer disagreement to take into account cases in which peer disagreement rationally affects an agent's first-order opinions without affecting higher-order ones.

"When Rational Reasoners Reason Differently" (Co-authored with Matthew Kopec.  In Reasoning: New Essays on Theoretical and Practical Thinking.  M. Balcerak-Jackson and B. Balcerak-Jackson eds.  Oxford University Press (2019).)

Different people reason differently, which means that sometimes they reach different conclusions from the same evidence. We maintain that this is not only natural, but rational. In this essay we explore the epistemology of that state of affairs. First we will canvass arguments for and against the claim that rational methods of reasoning must always reach the same conclusions from the same evidence. Then we will consider whether the acknowledgment that people have divergent rational reasoning methods should undermine one's confidence in one's own reasoning. Finally we will explore how agents who employ distinct yet equally rational methods of reasoning should respond to interactions with the products of each other's reasoning. We find that the epistemology of multiple reasoning methods has been misunderstood by a number of authors writing on epistemic permissiveness and peer disagreement.

"Plausible Permissivism" (Co-authored with Matthew Kopec.)

Consider this a director's cut of "When Rational Reasoners Reason Differently".  This version does a much more comprehensive job of surveying all the motivations and arguments that have been offered in favor of Feldman and White's Uniqueness Thesis.  Once those arguments are disambiguated, many are found to be question-begging.  The rest we argue against, often by pointing out that the epistemology of rational disagreement has been widely misunderstood.

"One's Own Reasoning"  (Inquiry 60 (2016): 208–32.)

Argues that facts about the outcomes of one's own reasoning processes have a different evidential status than facts about the outcomes of others'.

"Self-Locating Credences" (In The Oxford Handbook of Probability and Philosophy.  A. Hájek and C.R. Hitchcock eds.  Oxford University Press (2016).)

A plea: If you're going to propose a Bayesian framework for updating self-locating degrees of belief, please read this piece first.  I've tried to survey all the extant formalisms, group them by their general approach, then describe challenges faced by every formalism employing a given approach.  Hopefully this survey will prevent further instances of authors' re-inventing updating rules already proposed elsewhere in the literature.

"The Uniqueness Thesis" (Co-authored with Matthew Kopec.  Philosophy Compass 11 (2016): 189–200.)

Surveys the burgeoning literature on Richard Feldman's Uniqueness Thesis.

"Continuing On" (Canadian Journal of Philosophy 45 (2015): 670–691.)

Considers why there's rational pressure for an agent's beliefs to remain constant over time.

"Intuitive Dilation?" (Co-authored with Casey Hart. Thought 4 (2015): 252–262.)

Roger White objects to interval-valued credence theories because they produce a counterintuitive “dilation” effect in a story he calls the Coin Game. We respond that results in the Coin Game were bound to be counterintuitive anyway, because the story involves an agent who learns a biconditional. Biconditional updates produce surprising results whether the credences involved are ranged or precise, so White’s story is no counterexample to ranged credence theories.  
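
To illustrate why learning a biconditional produces surprising results even for precise credences, here is a toy calculation of my own (not an example taken from the paper): suppose an agent assigns credence 1/2 to H, credence x to X, and treats H and X as independent. Conditioning on the biconditional gives

\begin{align*}
cr(H \mid H \leftrightarrow X) \;=\; \frac{cr(H \land X)}{cr(H \land X) + cr(\lnot H \land \lnot X)} \;=\; \frac{x/2}{x/2 + (1-x)/2} \;=\; x,
\end{align*}

so the agent's credence in H gets dragged to whatever her credence in X happens to be, however far from 1/2 that lies.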

"Reply to Kim's 'Two Versions of Sleeping Beauty'" (Erkenntnis 80 (2015): 1237–1243.)

In his "Titelbaum's Theory of De Se Updating and Two Versions of Sleeping Beauty", Namjoong Kim proposes a counterexample to the Certainty Loss Framework I described in Quitting Certainties.  I explain how my framework handles Kim's example, and more generally address Bayesian difficulties with selecting the right language over which to construct a credence function. 

"Rationality's Fixed Point (Or: In Defense of Right Reason)" (Winner of the 2013 Sanders Prize in Epistemology.  Recognized by The Philosopher's Annual as one of "the ten best articles published in philosophy" in 2015.  Published in Oxford Studies in Epistemology 5 (2015): 253–294.)

Starting from the premise that akrasia is irrational, I argue that it is always a rational mistake to have false beliefs about the requirements of rationality.  Using that conclusion, I defend logical omniscience requirements, the claim that one can never have all-things-considered misleading evidence about what's rational, and the Right Reasons position concerning peer disagreement.  

"How to Derive a Narrow-Scope Requirement from Wide-Scope Requirements" (Philosophical Studies 172 (2015): 535–542.)

A brief piece showing that from generally-accepted wide-scope rational requirements (including the Enkratic Principle), one can derive a narrow-scope rational requirement using standard deontic logic.

"Deference Done Right" (Co-authored with Richard Pettigrew.  Philosophers' Imprint 14 (2014): 1–19.)

We consider three formal principles for deferring to epistemic experts, inspired by three relatives of David Lewis' Principal Principle.  Asking whether each of the principles allows for epistemic modesty and whether each is consistent with updating by Conditionalization, we conclude that two should be rejected and the third may be adopted in a modified form.

"Ten Reasons to Care about the Sleeping Beauty Problem" (Philosophy Compass 8 (2013): 1003–1017.)

The Sleeping Beauty Problem attracts so much attention because it connects to a wide variety of unresolved issues in formal epistemology, decision theory, and the philosophy of science.  The problem raises unanswered questions concerning relative frequencies, objective chances, the relation between self-locating and non-self-locating information, the relation between self-location and updating, Dutch Books, accuracy arguments, memory loss, indifference principles, the existence of multiple universes, and many-worlds interpretations of quantum mechanics.  After stating the problem, this article surveys its connections to all of those areas.

"De Se Epistemology" (In Attitudes "De Se": Linguistics, Epistemology, Metaphysics.  A. Capone and N. Feit eds.  CSLI Publications (2013).)

I argue that we can settle controversies about de se degrees of belief without first settling controversies about de se content.  I do so by describing a de se updating scheme built from elements available to all theories of content.  But I then suggest that solutions to degree of belief puzzles may favor certain theories of de se content over others.

"An Embarrassment for Double-Halfers" (Thought 1 (2012): 146–151.)

"Double-halfers" think that throughout the Sleeping Beauty Problem, the agent named Beauty should keep her credence that a fair coin flip comes up heads equal to 1/2. I introduce a new wrinkle to the problem that shows even double-halfers can't keep Beauty's credences equal to the objective chances for all coin-flip propositions. This leaves no way to deny that self-locating information generates an unexpected kind of inadmissible evidence.

"Symmetry and Evidential Support" (Symmetry 3 (2011): 680–698.)

This article explains the central technical result of "Not Enough There There" (see below) in a more step-by-step, accessible fashion.  It also frames that result in terms of the language-dependence problems faced by Carnap's early confirmation theories, and briefly describes the philosophical consequences more fully explored in the latter half of "Not Enough There There."

"Not Enough There There: Evidence, Reasons, and Language Independence" (Philosophical Perspectives 24 (2010): 477–528.)

Begins by explaining, and then proving, a generalized language dependence result similar to Goodman's "grue" problem.  I then use this result to cast doubt on the existence of an objective evidential favoring relation (such as "the evidence confirms one hypothesis over another," "the evidence provides more reason to believe one hypothesis over the other," "the evidence justifies one hypothesis over the other," etc.).  Once we understand what language dependence tells us about evidential favoring, our options are an implausibly strong conception of the a priori, a hard externalism on which agents are unable to determine what their evidence favors, or a subjectivist view that makes evidential favoring relative to features of the agent.

"Tell Me You Love Me: Bootstrapping, Externalism, and No-Lose Epistemology" (Philosophical Studies 149 (2010): 119–134.)

One thing wrong with any theory of justification that generates "bootstrapping" in Vogel's gas gauge example is that it permits a no-lose investigation—an investigation that may justify a proposition but is guaranteed not to undermine it.  I give necessary and sufficient conditions for no-lose investigations, then argue that they can be avoided only by a skeptic, a Closure-denier, or an internalist about justification.

"The Relevance of Self-Locating Beliefs" (Recognized by The Philosopher's Annual as one of "the ten best articles published in philosophy" in 2008.  Published in Philosophical Review 117 (2008): 555–605.  )

Formalizes and expands the traditional Bayesian framework for modeling agents' rational degrees of belief to apply to cases involving context-sensitive beliefs. Along the way, it offers a solution to the Sleeping Beauty Problem and defends that solution from alternate accounts.

"What Would a Rawlsian Ethos of Justice Look Like?" (Philosophy & Public Affairs 36 (2008): 289–322.)

A response to G. A. Cohen's argument that a prevailing "ethos" of justice would prevent a Rawlsian just society from having any income inequalities.  I suggest that Cohen's argument fails because a Rawlsian ethos would involve correlates of both of Rawls' principles of justice.

Review of David Christensen's Putting Logic in its Place (Mind 117 (2008): 677–681.)

Unpublished

"Contractualism, Chances, and Aggregation"

I propose a new way for a Scanlonian contractualist to argue that, when faced with a situation in which a number of people are threatened with the same level of harm, you should save as many people as possible from that harm.  The argument draws on a principle Sophia Reibetanz has defended for managing cases involving the chance of harm.

"De re Evidence and the Anthropic Argument for the Multiverse"

Explains why the argument from our existence to the existence of many universes fails.  After writing this, I discovered Roger White had already made the same point in his "Fine-Tuning and Multiple Universes".  Still, this piece makes that point in a simple, direct way without much mathematics.

"Does the Principal Principle Need Superbabies?"

I discuss why David Lewis framed his Principal Principle in terms of reasonable initial credence functions, then explain how the principle can be reformulated without reference to initial functions.

"An Infinitesimal Addition to Certain Frustration"

A brief response to Alan Hájek's Cable Guy Paradox.

[Page last modified 1/2024.]