I am an Early Career Research Fellow at the University of Oxford's Institute for Ethics in AI, and a Junior Research Fellow in Philosophy at Jesus College, Oxford. I am also a Research Associate at the Institute for Ethics in Technology, Hamburg University of Technology (TUHH).
I received my PhD in Philosophy from NYU. I have a BPhil in Philosophy and a BA in Philosophy, Politics and Economics, both from Oxford.
I work in the philosophy of AI, moral and political philosophy, metaphysics, and the philosophy of action. I currently have two main research projects. The first explores the thought that there is nothing whatsoever that agents in our world (including us and AI agents) are infallibly able to do: no realm of actions is insulated from the risk of failure. In my view, this idea has important and underappreciated implications for ethics. The second relates AI to group agency, especially the agency of corporations. I argue that many of the most consequential group agents in our world will likely soon become (or be replaced by) mixed human-AI group agents. In my view, mixed group agents differ from human-only group agents in ethically significant ways. Broadly, these projects are tied together by an interest in how an agent's internal structure and composition affect its agent-level features.
I am also interested in early modern philosophy (particularly Hume) and the metaphysics of time.
My CV is here, and my email address is david.storrs-fox@philosophy.ox.ac.uk. My teaching page is here.
Peer-Reviewed Publications
'Graded Abilities and Action Fragility' (Erkenntnis, 2023) [abstract | draft | published]
'Explanation and the A-theory' (Philosophical Studies, 2021) [abstract | draft | published]
Propositional temporalism is the view that there are temporary propositions: propositions that are true, but not always true. Factual futurism is the view that there are futurist facts: facts that obtain, but that will at some point not obtain. Most A-theoretic views in the philosophy of time are committed both to propositional temporalism and to factual futurism. Mark Richard, Jeffrey King and others have argued that temporary propositions are not fit to be the contents of propositional attitudes, or to be the semantic values of natural language utterances. But these discussions have overlooked another role that the A-theorist’s posits struggle to play: the role of facts in explaining other facts. Focusing on the case of action explanation by reasons, this paper presents the challenge that explanation poses for factual futurism. It then brings that challenge to bear against propositional temporalism and the A-theory more generally. My argument saddles the factual futurist with surprising commitments concerning reasons, facts and explanation. The futurist might accept those commitments and pay the price. The alternative – which I prefer – is to reject factual futurism, and with it the A-theory.
Under Review
[Paper on group agency and AI] [removed in line with journal guidelines; email me for draft]
[Paper on the normative theory of risk] [draft]
[Paper on the Principle of Alternative Possibilities] [draft]
Other Writing
'How to Hold Mixed Human-AI Groups Responsible' (Oxford Institute for Ethics in AI Blog, 2024) [link]