David Storrs-Fox
I am an Early Career Research Fellow at the University of Oxford's Institute for Ethics in AI, and a Junior Research Fellow in Philosophy at Jesus College, Oxford. I am also a Research Associate at the Institute for Ethics in Technology, Hamburg University of Technology (TUHH).
I received my PhD in Philosophy from NYU. I have a BPhil in Philosophy and a BA in Philosophy, Politics and Economics, both from Oxford.
I work in the philosophy of artificial intelligence (AI), moral philosophy, metaphysics, and the philosophy of action. The central idea of my recent research is that there is nothing whatsoever that agents in our world (including us and AI agents) are infallibly able to do: no realm of actions is insulated from the risk of failure. I argue that this idea has important implications for the ethics of risk, the metaphysics of abilities, and moral responsibility. My current project applies the central idea to group and AI agency, where I believe the implications are even more significant.
I am also interested in early modern philosophy (particularly Hume) and the metaphysics of time.
My CV is here, and my email address is david.storrs-fox@philosophy.ox.ac.uk. My teaching page is here.
Peer-Reviewed Publications
'Graded Abilities and Action Fragility' (Erkenntnis, 2023) [abstract | draft | published]
'Explanation and the A-theory' (Philosophical Studies, 2021) [abstract | draft | published]
Propositional temporalism is the view that there are temporary propositions: propositions that are true, but not always true. Factual futurism is the view that there are futurist facts: facts that obtain, but that will at some point not obtain. Most A-theoretic views in the philosophy of time are committed both to propositional temporalism and to factual futurism. Mark Richard, Jeffrey King and others have argued that temporary propositions are not fit to be the contents of propositional attitudes, or to be the semantic values of natural language utterances. But these discussions have overlooked another role that the A-theorist’s posits struggle to play: the role of facts in explaining other facts. Focusing on the case of action explanation by reasons, this paper presents the challenge that explanation poses for factual futurism. It then brings that challenge to bear against propositional temporalism and the A-theory more generally. My argument saddles the factual futurist with surprising commitments concerning reasons, facts and explanation. The futurist might accept those commitments and pay the price. The alternative – which I prefer – is to reject factual futurism, and with it the A-theory.
Under Review
[Paper on group agency and artificial intelligence] [email me for draft]
[Paper on the normative theory of risk] [draft]
[Paper on the Principle of Alternative Possibilities] [draft]
Other Writing
'How to Hold Mixed Human-AI Groups Responsible' (Oxford Institute for Ethics in AI Blog, 2024) [link]