The Philosophy of Artificial Intelligence Network Talks (PAINT) is a biweekly international online seminar series that connects philosophers working on normative aspects of AI, including moral and political philosophy, philosophy of science and technology, epistemology, philosophy of mind, metaphysics, and aesthetics.
All seminars are on Mondays at 8:30 am PT / 11:30 am ET / 4:30 pm London / 5:30 pm Berlin. (We use ET as the reference point when daylight saving changes shift things.)
Click here to register for the Zoom meetings.
Date: April 20, 2026
Speaker: Parisa Moosavi
Title: Machine Ethics and the Challenge from Moral Particularism
Abstract: Machine Ethics is an area of research at the intersection of philosophy and artificial intelligence that aims to design and develop intelligent machines capable of complying with moral standards. One of the central challenges facing Machine Ethics concerns the difficulty of capturing morally relevant considerations in a form that a machine can reliably follow. Critics such as Purves, Jenkins, and Strawser (2015) appeal to Moral Particularism—the view that morality is a fundamentally particularized or non-principled domain—to argue against the possibility of developing morally compliant AIs. They argue that moral truths cannot be captured in the form of exceptionless general principles and thus cannot be encoded for an algorithmic machine. In this talk, I will examine the prospects of Machine Ethics in addressing this challenge while differentiating between symbolic and connectionist approaches to representing moral standards. I argue that the force of the objection depends on how Moral Particularism is understood. On the most radical version, there is no pattern in the way moral truths connect to the descriptive aspects of the world, which would make it impossible for either symbolic or connectionist AI to learn to comply with morality. In contrast, on more moderate versions of the view, there is such a pattern, but one that is either difficult to articulate via exceptionless general principles or impossible to articulate using our current moral concepts. I argue that moderate versions of Moral Particularism are compatible with the possibility of modeling morality using both symbolic and connectionist AI, and thus that there is no difference between symbolic and connectionist AI in whether they can in principle learn to comply with moral norms. However, I also argue that on the more plausible versions of Moral Particularism, modeling morality is very difficult to do using either symbolic or connectionist AI alone. Accordingly, to develop AIs that can comply with moral norms, our best bet would be to combine the methods and techniques of both symbolic and connectionist approaches to Machine Ethics.
For a complete list of past and upcoming seminar presentations, see the Talks page.
The seminars are held on Zoom and last 60 minutes. Our seminars typically have one of the following formats:
Format 1: 30 min presentation + 30 min discussion
Format 2: two flash talks (15 min presentation + 15 min discussion each)
Kathleen Creel (Northeastern)
Sina Fazelpour (Northeastern)
Karina Vold (University of Toronto)
You can join our Google group to receive regular updates on PAINT:
Schmidt Sciences AI2050 Collaboration Fund