The Philosophy of Artificial Intelligence Network Talks (PAINT) is a biweekly international online seminar series that connects philosophers working on normative aspects of AI, including moral and political philosophy, philosophy of science and technology, epistemology, philosophy of mind, metaphysics, and aesthetics.
All seminars are on Mondays at 8:30 am PT / 11:30 am ET / 4:30 pm London / 5:30 pm Berlin. (We use ET as the reference point when daylight saving changes shift things.)
Click here to register for the Zoom meetings.
Date: Feb 9, 2026
Speaker: Daniel J Singer and Luca Garzino Demo
Title: The Future of AI is Many, Not One
Abstract: Generative AI is currently being developed and used in a way that is distinctly singular. We see this not just in how users interact with models but also in how models are built, how they're benchmarked, and how commercial and research strategies using AI are defined. We argue that this singular approach is a flawed way to engage with AI if we're hoping for it to support groundbreaking innovation and scientific discovery. Drawing on research in complex systems, organizational behavior, and philosophy of science, we show why we should only expect deep intellectual breakthroughs to come from epistemically diverse teams of AI models, not singular superintelligent models. Having a diverse team broadens the search for solutions, delays premature consensus, and allows for the pursuit of unconventional approaches. Developing AI teams like these directly addresses critics' concerns that current models are constrained by past data and lack the creative insight required for innovation. In the paper, we explain what constitutes genuinely diverse teams of AI models, distinguishing them from current multi-agent systems, and outline how to implement meaningful diversity in AI collectives. The upshot, we argue, is that the future of transformative transformer-based AI is fundamentally many, not one.
Date: Feb 23, 2026
Speaker: Huzeyfe Demirtas
Title: (How) Does Accountability Require Explainable AI?
Abstract: Autonomous systems powered by artificial intelligence (AI) are said to generate responsibility gaps (RGs)—cases in which AI causes harm, yet no one is blameworthy. This paper has three aims. First, I argue that we should stop worrying about RGs. This is because, on the most popular contemporary theories, blameworthiness is determined at the development or deployment stage, making post-deployment outcomes irrelevant to blameworthiness. Another upshot of this argument is that questions about blameworthiness do not motivate the demand for explainable AI (XAI). Second, I distinguish blameworthiness from liability and show that blameworthiness is not necessary—nor is it sufficient—for liability. Third, I explore how AI opacity complicates identifying who caused harm—an essential step in assigning liability. However, I argue that identifying who caused the harm—even if we use opaque AI models—is within our reach and not too costly. But liability in the context of AI requires further inquiry, which again suggests that we should stop worrying about RGs and focus on liability. Two further results emerge. One, my discussion presents a framework for analyzing how accountability might require XAI. Two, if my arguments based on this framework are on the right track, XAI is of little significance for accountability. Hence, we should worry about transparency around the AI—its training, deployment, and broader sociopolitical context—not inside the AI.
Date: March 9, 2026
Speaker: Mike Barnes
Title: TBD
Date: March 23, 2026
Speaker: Iwan Williams
Title: Intention-like representations in Large Language Models?
Abstract: A growing chorus of AI researchers and philosophers posits internal representations in large language models (LLMs). But how do these representations relate to the kinds of mental states we routinely ascribe to our fellow humans? While some research has focused on belief- or knowledge-like states in LLMs, there has been comparatively little focus on the question of whether LLMs have intentions. I survey five properties that have been associated with intentions in the philosophical literature, and assess two candidate classes of LLM representations against this set of features. The result is mixed: LLMs have representations that are intention-like in many—perhaps surprising—respects, but they differ from human intentions in important ways.
Date: April 6, 2026
Speaker: Jessie Hall
Title: TBD
Date: April 20, 2026
Speaker: Parisa Moosavi
Title: Machine Ethics and the Challenge from Moral Particularism
Abstract: Machine Ethics is an area of research at the intersection of philosophy and artificial intelligence that aims to design and develop intelligent machines capable of complying with moral standards. One of the central challenges facing Machine Ethics concerns the difficulty of capturing morally relevant considerations in a form that a machine can reliably follow. Critics such as Purves, Jenkins, and Strawser (2015) appeal to Moral Particularism—the view that morality is a fundamentally particularized or non-principled domain—to argue against the possibility of developing morally compliant AIs. They argue that moral truths cannot be captured in the form of exceptionless general principles and thus cannot be encoded for an algorithmic machine. In this talk, I will examine the prospects of Machine Ethics in addressing this challenge while differentiating between symbolic and connectionist approaches to representing moral standards. I argue that the force of the objection depends on how Moral Particularism is understood. On the most radical version, there is no pattern in the way moral truths connect to the descriptive aspects of the world, which would make it impossible for either symbolic or connectionist AI to learn to comply with morality. In contrast, on more moderate versions of the view, there is such a pattern, but one that is either difficult to articulate via exceptionless general principles or impossible to articulate using our current moral concepts. I argue that moderate versions of moral particularism are compatible with the possibility of modeling morality using both symbolic and connectionist AI. I thus argue that there is no difference between symbolic and connectionist AI in whether they can in principle learn to comply with moral norms. However, I also argue that on the more plausible versions of moral particularism, modeling morality is very difficult to do using either symbolic or connectionist AI alone. Accordingly, to develop AIs that can comply with moral norms, our best bet would be to combine the methods and techniques of both symbolic and connectionist approaches to Machine Ethics.
For a complete list of past and upcoming seminar presentations, see the Talks page.
The seminars are held on Zoom and last 60 minutes, typically in one of the following formats:
Format 1: 30 min presentation + 30 min discussion
Format 2: two flash talks, 15 min presentation + 15 min discussion each
Kathleen Creel (Northeastern)
Sina Fazelpour (Northeastern)
Karina Vold (University of Toronto)
You can join our Google group to receive regular updates on PAINT.
Schmidt Sciences AI2050 Collaboration Fund