Peer-Reviewed Publications
'Graded Abilities and Action Fragility' (Erkenntnis 90, 2025) [abstract | draft | published]
'Explanation and the A-theory' (Philosophical Studies 178, 2021) [abstract | draft | published]
Propositional temporalism is the view that there are temporary propositions: propositions that are true, but not always true. Factual futurism is the view that there are futurist facts: facts that obtain, but that will at some point not obtain. Most A-theoretic views in the philosophy of time are committed both to propositional temporalism and to factual futurism. Mark Richard, Jeffrey King and others have argued that temporary propositions are not fit to be the contents of propositional attitudes, or to be the semantic values of natural language utterances. But these discussions have overlooked another role that the A-theorist’s posits struggle to play: the role of facts in explaining other facts. Focusing on the case of action explanation by reasons, this paper presents the challenge that explanation poses for factual futurism. It then brings that challenge to bear against propositional temporalism and the A-theory more generally. My argument saddles the factual futurist with surprising commitments concerning reasons, facts and explanation. The futurist might accept those commitments and pay the price. The alternative – which I prefer – is to reject factual futurism, and with it the A-theory.
In 2022, Philosophical Studies published a reply to this article by Olley Pearson (here).
Under Review
Paper on group agency and AI [removed in line with journal guidelines; email me for draft]
Paper on the normative theory of risk [draft]
Paper on the Principle of Alternative Possibilities, R&R [draft]
In Progress
Paper on corporate responsibility for AI decisions (with Kenneth Silver and Ziad Elsahn)
Paper on how to understand AI responsibility gaps, and why they might be problematic
Paper that provides a novel solution to AI responsibility gaps
Paper on collective fallibility and global obligations
Other Writing
'How to Hold Mixed Human-AI Groups Responsible' (Oxford Institute for Ethics in AI Blog, 2024) [link]