Peer-Reviewed Publications
'Inability, Fallibility, and the Positive Case for PAP' (Philosophical Studies, forthcoming) [draft]
The Principle of Alternative Possibilities (PAP) says that you're morally responsible for what you've done only if you could have done otherwise. I appeal to the fallibility of human abilities to undermine a popular source of support for PAP.
'Graded Abilities and Action Fragility' (Erkenntnis 90, 2025) [draft | published]
In my view, we’re not infallibly able to do anything. But what is an ability’s fallibility? It’s not just about the likelihood of success if you try, since not even your abilities to try are infallible. I propose an alternative account, which deploys the notion of an action’s "fragility" (somewhat analogous to epistemic safety).
'Explanation and the A-theory' (Philosophical Studies 178, 2021) [draft | published]
Suppose it’s raining, so you grab your umbrella. A-theorists about time think that once the rain stops, the fact that it is raining no longer obtains — but then it looks like that fact can no longer explain your action. I use this point to make trouble for the A-theory. (Philosophical Studies also published a reply to this article.)
'Hume's Skeptical Definitions of 'Cause'' (Hume Studies 43:1, 2020 [backdated to 2017]) [draft | published]
Hume wrote that his Treatise “tends to give us a notion of the imperfections and narrow limits of human understanding.” I argue that this skeptical aim explains why he defines 'cause' twice in the Treatise, and why he does so again in the first Enquiry.
R&Rs
A paper about fallibility and risk (R&R, currently revising) [draft]
I hold that all our abilities are fallible. You're not infallibly able even to try or decide. I argue that this claim challenges our best-developed approach to normative theorizing about risk, and sketch a way forward.
A paper about human-AI hybrid groups (R&R, currently revising) [paper removed in line with journal guidelines, email me for draft]
As AI systems take on larger roles in corporations and states, will these "group agents" remain apt targets of blame? I argue that the presence of AI could threaten the moral capacities that undergird group blameworthiness.
Other Papers
A paper applying the framework of human-AI hybrid groups to the "responsibility gaps" to which AI systems allegedly give rise. (In progress.)
A paper that gives a novel account of what responsibility gaps involving AI consist in, and why they're worth caring about. (In progress.)
A paper on corporate responsibility for AI decisions. (In progress, with Kenneth Silver and Ziad Elsahn.)
A paper on group fallibility and collective obligations. (In progress.)
Public-Facing Writing
'How to Hold Mixed Human-AI Groups Responsible' (Oxford Institute for Ethics in AI Blog, 2024) [link]