My main areas of research are technology ethics and early modern philosophy.
In technology ethics, I work on moral questions concerning our actions and representations in digital spaces, and concerning the design and deployment of robotics and AI technologies.
In early modern philosophy, I work primarily on Leibniz's philosophy, especially his views on causation, creation, and ontological (in)dependence, and the connections between these three views.
I also have secondary research interests elsewhere in early modern philosophy, as well as in applied ethics, philosophy of games, contemporary metaphysics, and philosophy of religion.
"Deepfakes and Dishonesty"
with Christian B. Miller
(2024, Philosophy & Technology)
(PhilPapers page with link to open-access article here)
Abstract: Deepfakes raise various concerns: risks of political destabilization, depictions of persons without their consent that cause them harm, erosion of trust in video and audio as reliable sources of evidence, and more. These concerns have been the focus of recent work in the philosophical literature on deepfakes. However, there has been almost no sustained philosophical analysis of deepfakes from the perspective of concerns about honesty and dishonesty. That deepfakes are potentially deceptive is unsurprising and has been noted. But under what conditions does the use of deepfakes fail to be honest? And which human agents, involved in one way or another in a deepfake, fail to be honest, and in what ways? If we are to better understand the morality of deepfakes, these questions need answering. Our first goal in this paper, therefore, is to offer an analysis of paradigmatic cases of deepfakes in light of the philosophy of honesty. While it is clear that many deepfakes are morally problematic, there has been a rising counter-chorus claiming that deepfakes are not essentially morally bad, since there might be uses of deepfakes that are not morally wrong, or even that are morally salutary, for instance, in education, entertainment, activism, and other areas. However, while there are reasons to think that deepfakes can supply or support moral goods, it is nevertheless possible that even these uses of deepfakes are dishonest. Our second goal in this paper, therefore, is to apply our analysis of deepfakes and honesty to the sorts of deepfakes that are hoped to be morally good or at least morally neutral. We conclude that, perhaps surprisingly, in many of these cases the use of deepfakes will be dishonest in some respects. Of course, there will be cases of deepfakes for which verdicts about honesty and moral permissibility do not line up. While we will sometimes suggest reasons why moral permissibility verdicts might diverge from honesty verdicts, we will not aim to settle matters of moral permissibility.
"Cheap Tactics in Competitive Gaming"
(Forthcoming, Virtue Theory and Video Games: Level Up Your Character, Routledge)
(PhilPapers page with link to penultimate draft here)
Abstract: Many gamers complain about “cheap” or “cheesy” tactics in competitive play. I give an account of these complaints as moral claims expressing a negative evaluation of players’ actions and/or character. After a brief history of cheap tactics, I survey existing definitions of cheapness, arguing none are adequate. I then offer my own definition, arguing that it avoids the shortcomings of existing definitions, captures the essence of cheapness, explains the moral grounds for complaints about cheapness, and distinguishes cheapness from cheating and toxicity. In closing, I reflect on what can be done about cheapness in competitive gameplay.
Review of This is Technology Ethics: An Introduction, by Sven Nyholm, Wiley-Blackwell, 2023.
(Forthcoming, Journal of Moral Philosophy)
(PhilPapers page with download here)
"The Kant-Inspired Indirect Argument for Non-Sentient Robot Rights"
(2023, AI and Ethics)
(PhilPapers page with download here)
Abstract: Some argue that robots could never be sentient, and thus could never have intrinsic moral status. Others disagree, believing that robots indeed will be sentient and thus will have moral status. But a third group thinks that, even if robots could never have moral status, we still have a strong moral reason to treat some robots as if they do. Drawing on a Kantian argument for indirect animal rights, a number of technology ethicists contend that our treatment of anthropomorphic or even animal-like robots could condition our treatment of humans: treat these robots well, as we would treat humans, or else risk eroding good moral behavior toward humans. But then, this argument also seems to justify giving rights to robots, even if robots lack intrinsic moral status. In recent years, however, this indirect argument in support of robot rights has drawn a number of objections. In this paper I have three goals. First, I will formulate and explicate the Kant-inspired indirect argument meant to support robot rights, making clearer than before its empirical commitments and philosophical presuppositions. Second, I will defend the argument against a number of objections. The result is the fullest explication and defense to date of this well-known and influential but often criticized argument. Third, however, I myself will raise a new concern about the argument’s use as a justification for robot rights. This concern is answerable to some extent, but it cannot be dismissed fully. It shows that, surprisingly, the argument’s advocates have reason to resist, at least somewhat, producing the sorts of robots that, on their view, ought to receive rights.
"May Kantians Commit Virtual Killings?"
(2021, Ethics and Information Technology)
(PhilPapers page with download here)
Abstract: Are acts of violence performed in virtual environments ever morally wrong, even when no other persons are affected? While some such acts surely reflect deficient moral character, I focus on the moral rightness or wrongness of acts. Typically it’s thought that, on Kant’s moral theory, an act of virtual violence is morally wrong (i.e., violates the Categorical Imperative) only if the act mistreats another person. But I argue that, on Kant’s moral theory, some acts of virtual violence can be morally wrong, even when no other persons or their avatars are affected. First, I explain why many have thought that, in general on Kant’s moral theory, virtual acts affecting no other persons or their avatars can’t violate the Categorical Imperative. There are real-world acts that clearly do, and yet when we consider the same sorts of acts done alone in a virtual environment, it seems they don’t violate the Categorical Imperative, because no other persons are involved. But then, how could any virtual acts like these, which affect no other persons or their avatars, violate the Categorical Imperative? I then argue that there indeed can be such cases of morally wrong virtual acts—some due to an actor’s having erroneous beliefs about morally relevant facts, and others due not to error, but to the actor’s intention leaving out morally relevant facts while immersed in a virtual environment. I conclude by considering some implications of my arguments both for our present technological context and for the future.
"High Fives and Pre-Established Harmony: Leibniz’s “A New System of Nature”"
(2024, The Philosophy Teaching Library)
(PhilPapers page with link to open-access article here)
Abstract: One of Gottfried Wilhelm Leibniz’s most distinctive philosophical theories is the pre-established harmony, his big-picture explanation for the appearance of causal interaction in the world. According to Leibniz, and despite how it seems, neither you, nor I, nor any other thing created by God can cause changes in any other thing! When I high-five you, it’s not really me that causes the stinging sensation in your hand. Instead, every change each created thing undergoes—including that sting in your hand—is actually, surprising as it might seem, caused internally by that thing itself. But then, why does the world appear to be a place of orderly, law-like pushings and pullings, actions and reactions? This is because God chose to create only those things that would each, of its own internal nature, cause changes in itself in perfect harmony with what every other created thing causes in itself, sort of like a symphony of automata programmed and synchronized to play their own individual parts of the same overall composition in perfect, well, harmony. Leibniz offers perhaps his best-known statement of the pre-established harmony in “A New System of Nature” (1695), where he also argues that we ought to accept his theory instead of its major competitors at the time, namely, Descartes’s interactionism and Malebranche’s occasionalism. The present work is an open-access, peer-reviewed, and user-friendly version of “A New System of Nature”, intended for use in teaching courses on early modern philosophy. It’s edited for clarity and accessibility, and includes commentary, examples, and other elements meant to help engage readers.
"Leibniz’s Causal Road to Existential Independence"
(2023, History of Philosophy & Logical Analysis)
(PhilPapers page with download here)
Abstract: Leibniz thinks that every created substance is causally active, and yet causally independent of every other: none can cause changes in any but itself. This is not controversial. But Leibniz also thinks that every created substance is existentially independent of every other: it is metaphysically possible for any to exist with or without any other. This is controversial. I argue that, given a mainstream reading of Leibniz’s essentialism, if one accepts the former, uncontroversial interpretation concerning causal independence, then one ought also to accept the latter, controversial one concerning existential independence. This is a new way to defend the ‘existential independence’ interpretation. Moreover, this defense provides a new approach for defending the broadly ‘non-logical’ interpretive camp in the longstanding debate over Leibniz’s views on incompossibility, against perhaps the strongest objection leveled by advocates of the opposing broadly ‘logical’ interpretation.
"Leibniz’s Lost Argument Against Causal Interaction"
(2020, Ergo)
(PhilPapers page with link to open-access article here)
Abstract: It is clear that, according to Leibniz, no created substances can causally interact. And Leibniz clearly needs this to be true, since his well-known pre-established harmony—his alternative to interactionism and occasionalism—is premised upon it. So, what is Leibniz’s argument against causal interaction? Sometimes he claims that interaction between substances is superfluous; sometimes he claims that it would require the transfer of accidents, and that this is impossible. But when Leibniz finds himself under sustained pressure to defend his denial of causal interaction, those are not the reasons that he marshals in its defense. Instead, deep into his long correspondence with Burchard de Volder, he gives a different sort of argument, one that has gone nearly unnoticed by commentators and has not yet been properly understood. In part, this is because the argument develops slowly over four years of correspondence. It emerges in early 1704, but it is formulated tersely and appears murky unless understood in light of Leibniz and De Volder’s tangled exchanges. There Leibniz argues that, on his distinctive ontology of an infinity of created substances, no two created substances could possibly causally interact, for roughly the same reasons that some Cartesians like De Volder deny interaction between minds and bodies on their substance dualist ontology. In this paper I draw out this lost argument, explain it and the metaphysics on which Leibniz builds it, and untangle Leibniz and De Volder’s exchanges concerning causation from which this argument results.