Published articles:
15. "A Divergence-Based Method for Weighting and Averaging Model Predictions" (2026) Proceedings of the 29th International Conference on Artificial Intelligence and Statistics (AISTATS), PMLR.
Synopsis: This paper uses a divergence-based framework (also discussed in publication 13) to propose a new method for averaging probabilistic predictions from multiple statistical or machine learning models. The method is very general and is shown to have a small-sample advantage over rival state-of-the-art model combination methods.
14. "The Bayesian and the Abductivist" (with Mattias Skipper) (2025) Noûs 59 (4): 921-937.
Synopsis: We formulate a trilemma for any attempt at formalizing explanatory power within a Bayesian framework.
13. "On the Ecological and Internal Rationality of Bayesian Updating and Other Belief Updating Strategies" (2024) British Journal for the Philosophy of Science.
Synopsis: This paper uses examples and arguments to delineate the necessary and sufficient conditions under which Bayesian updating may be expected to be rationally optimal. The conditions turn out to be quite narrow.
12. "What Hinge Epistemology and Bayesian Epistemology Can Learn From Each Other" (2023) Asian Journal of Philosophy 2 (53): 1-21.
Synopsis: This paper formalizes the basic commitments of hinge epistemology from a Bayesian point of view and argues that there are important consequences for both frameworks.
11. "Sometimes it is better to do nothing: a new argument for causal decision theory" (2023) Ergo.
Synopsis: It is often thought that the only significant difference between causal and evidential decision theory is that they give different verdicts in Newcomb-style examples, where one's choices/actions and outcomes are correlated because of a confounder. However, this paper gives examples of a very different kind and shows that causal decision theory does better on those examples.
10. "Justifying the Norms of Inductive Inference" (2022) British Journal for the Philosophy of Science 73 (1): 135-160.
Synopsis: This paper is an attempt at giving a general axiomatic characterization of probabilistic inference, which includes as special cases Bayesian inference, explanationist inference, and various other proposals.
9. "Why change your beliefs rather than your desires? Two puzzles" (2021) Analysis 81(2): 275-281.
Synopsis: It's widely thought that beliefs and desires have a different "direction of fit," so that beliefs should be changed in light of evidence whereas desires should not. This paper argues that in fact there are good grounds for thinking that it's rational to update desires as well, but that two important puzzles then arise.
8. "A Verisimilitude Framework for Inductive Inference, with an Application to Phylogenetics" (2020) British Journal for the Philosophy of Science 71 (4): 1359-1383.
Synopsis: This paper argues that the law of likelihood faces problems if all the hypotheses under consideration are known to be false and proposes a framework for inductive inference where measures of evidential favoring are calibrated to context-specific measures of closeness to the truth.
7. "New Semantics for Bayesian Inference: The Interpretive Problem and Its Solutions" (2019) Philosophy of Science 86 (4): 696-718.
Synopsis: In statistical practice, models are often known to be false and the standard degree of belief interpretation of probability is therefore on shaky ground. This paper discusses two alternative interpretations of Bayesian probability.
6. "Confirmation and the Ordinal Equivalence Thesis" (2019) Synthese 196 (3): 1079-1095.
Synopsis: This paper argues that confirmation measures must be more than mere ordinal measures if they are to be used legitimately in many of the ways that philosophers have used them.
5. "Ideal Counterpart Theorizing and the Accuracy Argument for Probabilism" (with Clinton Castro) (2018) Analysis 78 (2): 207-216.
Synopsis: We introduce an example that shows that standard accuracy-based arguments for Probabilism fail, and that Probabilism turns out to be false given Pettigrew's (2016) version of the framework. We then use our discussion of Pettigrew's framework to draw an important lesson about normative theorizing that relies on the positing of ideal agents.
4. "Comment: The Inferential Information Criterion from a Bayesian Point of View" (2018) Sociological Methodology 48 (1): 91-97.
Synopsis: This paper argues that a model selection criterion introduced by Michael Schultz called the "Inferential Information Criterion" is hard to reconcile with a Bayesian perspective.
3. "Goals and the Informativeness of Prior Probabilities" (2018) Erkenntnis 83 (4): 647-670.
Synopsis: This paper links the literatures on scoring rules, information measures, and confirmation measures, and argues that whether a confirmation measure/scoring rule/information measure/probability distribution is appropriate for an agent depends on the agent's goals.
2. "The Philosophical Significance of Stein's Paradox" (with Elliott Sober and Branden Fitelson) (2018) European Journal for the Philosophy of Science 7 (3): 411-433.
Synopsis: Stein's result and its generalizations show that it is sometimes better to pool together multiple estimates, even if there is no reason for thinking that the estimates have any connection to each other. We discuss some of the philosophical implications of this paradoxical state of affairs and argue that it underwrites a form of epistemic pragmatism.
1. "Confirmation Measures and Sensitivity" (2015) Philosophy of Science 82 (5): 892-904. (Note that the linked Philsci-Archive version differs slightly from the published version.)
Synopsis: This paper gives an argument in favor of the log-likelihood measure of confirmation. However, I now prefer to view it as providing a novel quantitative characterization of the three most common Bayesian confirmation measures.
Published book review:
"Review of 'Bayesian Philosophy of Science' by Sprenger and Hartmann" (2023) Erkenntnis 88 (5): 2245-2249.
Papers under review or in progress:
Synopsis: The principle of indifference is famously language dependent, which has led many people to reject it. This paper argues, however, that there is a principled way of solving the problem. The solution hinges on a new argument that shows that the principle of indifference maximizes an objective kind of expected accuracy.
"The Value of the Causal Markov Condition"