The Philosophy of Artificial Intelligence Network Talks (PAINT) is a biweekly international online seminar series that connects philosophers working on normative aspects of AI across subfields including moral and political philosophy, philosophy of science and technology, epistemology, philosophy of mind, metaphysics, and aesthetics.
All seminars are on Mondays at 8:30 am PT / 11:30 am ET / 4:30 pm London / 5:30 pm Berlin. (We use ET as the reference point when daylight saving changes shift the other times.)
Click here to register for the Zoom meetings
Date: January 26, 2026
Speaker: Madeleine Ransom (joint work with Nicole Menard)
Title: A Dilemma for Skeptics of Trustworthy AI
Abstract: Can AI ever be (un)trustworthy? A growing number of philosophers argue that it cannot, because AI lacks some human feature deemed essential for the trust relation, such as moral agency or responsiveness to reasons. Here we propose a dilemma for these skeptics. Such theorists must hold either that there is only one kind of trust (monism) or that there are multiple varieties of trust (pluralism). The first horn of the dilemma is that a monistic view of trust is implausible: no one analysis can capture all kinds of trust relationships. The second horn is that if such theorists adopt a pluralistic account of trust, they have little reason to deny that AI is the sort of thing that can be trustworthy: while AI may fail to possess characteristics required for some kinds of trust relations, these are not necessary conditions for trustworthiness.
Date: February 9, 2026
Speakers: Daniel J. Singer and Luca Garzino Demo
Title: The Future of AI is Many, Not One
Abstract: Generative AI is currently being developed and used in a way that is distinctly singular. We see this not just in how users interact with models but also in how models are built, how they're benchmarked, and how commercial and research strategies using AI are defined. We argue that this singular approach is a flawed way to engage with AI if we're hoping for it to support groundbreaking innovation and scientific discovery. Drawing on research in complex systems, organizational behavior, and philosophy of science, we show why we should only expect deep intellectual breakthroughs to come from epistemically diverse teams of AI models, not singular superintelligent models. Having a diverse team broadens the search for solutions, delays premature consensus, and allows for the pursuit of unconventional approaches. Developing AI teams like these directly addresses critics' concerns that current models are constrained by past data and lack the creative insight required for innovation. In the paper, we explain what constitutes genuinely diverse teams of AI models, distinguishing them from current multi-agent systems, and outline how to implement meaningful diversity in AI collectives. The upshot, we argue, is that the future of transformative transformer-based AI is fundamentally many, not one.
Date: February 23, 2026
Speaker: Huzeyfe Demirtas
Title: (How) Does Accountability Require Explainable AI?
Abstract: Autonomous systems powered by artificial intelligence (AI) are said to generate responsibility gaps (RGs)—cases in which AI causes harm, yet no one is blameworthy. This paper has three aims. First, I argue that we should stop worrying about RGs. This is because, on the most popular contemporary theories, blameworthiness is determined at the development or deployment stage, making post-deployment outcomes irrelevant to blameworthiness. Another upshot of this argument is that questions about blameworthiness do not motivate the demand for explainable AI (XAI). Second, I distinguish blameworthiness from liability and show that blameworthiness is not necessary—nor is it sufficient—for liability. Third, I explore how AI opacity complicates identifying who caused harm—an essential step in assigning liability. However, I argue that identifying who caused the harm—even if we use opaque AI models—is within our reach and not too costly. But liability in the context of AI requires further inquiry, which again suggests that we should stop worrying about RGs and focus on liability. Two further results emerge. One, my discussion presents a framework for analyzing how accountability might require XAI. Two, if my arguments based on this framework are on the right track, XAI is of little significance for accountability. Hence, we should worry about transparency around the AI—its training, deployment, and broader sociopolitical context—not inside the AI.
For a complete list of past and upcoming seminar presentations, see the Talks page.
The seminars are held on Zoom and last 60 minutes. Our seminars typically have one of the following formats:
Format 1: 30 min presentation + 30 min discussion
Format 2: two flash talks, 15 min presentation + 15 min discussion each
Kathleen Creel (Northeastern)
Sina Fazelpour (Northeastern)
Karina Vold (University of Toronto)
You can join our Google group to receive regular updates on PAINT:
Schmidt Sciences AI2050 Collaboration Fund