Peer-Reviewed Publications
Recent work by Alfred Mele, Romy Jaster, and Chandra Sripada recognizes that abilities come in degrees of fallibility. The rough idea is that abilities are often not surefire: they are liable to fail, and the more liable an ability is to fail, the more fallible it is. Fallibility is plausibly significant for addiction, responsibility, and normative theorizing. However, we lack an adequate account of what fallibility consists in. This article addresses that problem. Perhaps the most natural approach is to say (roughly) that the fallibility of your ability to F is the proportion of scenarios in which you do not F, among those in which you try to F. I argue that this approach (in all plausible versions) is mistaken. I then introduce the notion of an action's “fragility” and propose that we use that new notion to understand fallibility.
In 2022, Philosophical Studies published a reply to this article by Olley Pearson (here).
Under Review
[Paper on group agency and AI] [removed in line with journal guidelines; email me for draft]
[Paper on the normative theory of risk] [draft]
[Paper on the Principle of Alternative Possibilities] [draft]
In Progress
Paper on corporate responsibility for AI decisions (with Kenneth Silver and Ziad Elsahn)
Paper on how to understand AI responsibility gaps, and why they might be problematic
Paper that provides a novel solution to AI responsibility gaps
Paper on collective fallibility and global obligations
Other Writing
'How to Hold Mixed Human-AI Groups Responsible' (Oxford Institute for Ethics in AI Blog, 2024) [link]