Day 1
10.30–11.15 Andrew Fyfe
(Boğaziçi University)
"How to Achieve Temporally Extended Agency in Artificial Agents Using Intra-Personal Collective Agency"
What makes an artificial system a genuine agent? Most discussions focus on the synchronic features of agency—those features that belong to a system at a single point in time. This paper argues that diachronic agency—the capacity to act as a unified agent over time—is a necessary condition for artificial agents to qualify as genuine agents. Moreover, I contend that understanding diachronic agency will require importing ideas from the philosophy of collective agency. My central proposal is that temporally extended agency in an individual AI system is best modeled as a form of intra-personal group agency.
The paper begins by distinguishing thin and thick conceptions of AI agency. Thin models, such as those that rely on mere Turing Test success, fall short because they do not require of AI systems such things as embodiment or stable representational commitments. Thick models, by contrast, risk setting the bar too high. And while my sympathies lie more with thick than thin accounts, I acknowledge that genuine agency need not require every feature that such accounts sometimes demand. Still, even thick models often fail to require a feature that would address what I take to be a crucial question: what ensures that the agent today and the agent tomorrow are part of the same agent? That is, what is required to make the AI agent temporally extended?
To answer this, I draw from theories of collective agency and team reasoning (including the work of Margaret Gilbert, Raimo Tuomela, Pekka Mäkelä, Natalie Gold, Robert Sugden, and others). I argue that the shared deliberative standpoint of diachronic agency is a special case of collective agency, where the group is spread out across time instead of space. That is, we should think of an agent extended over time as composed of multiple time-slices or sub-agents that must coordinate from a shared “we” perspective in order to act as one. This view reveals something important that current AI systems lack: they do not coordinate with their future selves. They lack the deliberative infrastructure required for temporally extended self-governance. Without this, they fall short of the rich kind of agency that we humans possess and that is often too quickly claimed on their behalf.
11.15–12.00 Strahinja Đorđević
(University of Belgrade)
"Group Agency, Artificial Intelligence, and the Epistemic Dimensions of Third-Degree Monitoring Responsibility"
In recent years, epistemic trespassing has attracted increasing attention in academic literature. Originally introduced by Nathan Ballantyne (2019), the term refers to the act of expressing opinions or making judgments on a subject within a discipline where one does not possess the necessary expertise. While initially framed as problematic, epistemic trespassing has recently been re-evaluated as a phenomenon that can yield positive outcomes (Pavličić et al. 2023). Within this framework, we may attribute to epistemic trespassers something that could be described as first-degree monitoring responsibility. This is because trespassers must possess comprehensive epistemic evidence to ensure that their judgments are epistemically permissible. Conversely, information-seekers (both experts and laymen) are assigned second-degree monitoring responsibility. This responsibility involves mitigating the potential adverse effects of unjustified epistemic trespassing, such as the dissemination of erroneous judgments. In essence, while trespassers are responsible for ensuring the validity of their own judgments, information-seekers are tasked with addressing and correcting any negative consequences arising from the spread of unverified or misleading information.
Although epistemic trespassing was initially framed around individual human agents, its implications for AI systems and group agents are becoming increasingly salient as these entities shape contemporary epistemic environments. Given their ability to act autonomously, pursue defined goals, and make impactful decisions, AI systems and group agents can function as intentional agents (List 2021), significantly impacting the dynamics of epistemic practices and making them increasingly relevant to discussions of epistemic trespassing. However, although both types of agents may act autonomously, their detachment from human intentionality and expertise (setting aside non-autonomous social groups, in which a single individual, rather than the group as a whole, holds ultimate responsibility for decisions or judgments) challenges the extent to which traditional notions of competence and epistemic responsibility can be meaningfully applied to them.
Therefore, the main goal of this presentation is to examine whether group agents, such as organizations and corporations, as well as artificial intelligence systems, can be regarded as entities possessing third-degree monitoring responsibility. If these agents influence epistemic environments by shaping knowledge distribution and validating claims, they could potentially serve as an additional safeguard against epistemic trespassing. However, as previously implied, their disengagement from traditional human agency raises significant concerns about their capacities. Can decisions made by AI systems and group agents be considered epistemically valid without the necessary expertise, and to what extent can they be held accountable for the consequences of their epistemic actions? Due to the distinct characteristics of AI systems and group agency, which differentiate them from individual information-seekers in both their operational modes and the nature of their agency, addressing these challenges necessitates a careful structural and ontological examination. Ultimately, my objective is to evaluate whether their epistemic capacities fulfill the criteria for third-degree monitoring responsibility (which will be introduced and examined more extensively during the presentation), and, if they do not, to identify the sources of their limitations—be they structural, data-related, rooted in bias, or arising from other epistemic flaws.
Ballantyne, N. (2019). Epistemic Trespassing. Mind, 128 (510): 367–395.
List, C. (2021). Group Agency and Artificial Intelligence. Philosophy and Technology, 34(4): 1–30.
Pavličić, J., Dimitrijević, J., Vučković, A., Đorđević, S., Nedeljković, A., & Tešić, Ž. (2023). Friend or Foe? Rethinking Epistemic Trespassing. Social Epistemology, 38(2): 249–266.
Lunch 12.00–13.30
13.30–14.15 Kayla Carnation
"Epistemic Emergence in Human and AI Systems: Metaphysical Foundations and Their Epistemic Significance"
This essay examines how emergent intelligence—the phenomenon where complex cognitive capacities arise from interactions among simpler components—is metaphysically grounded in both human collectives and artificial intelligence (AI) systems, and analyzes the epistemic consequences of these distinct groundings. Despite structural analogies between human collectives and AI systems, the epistemic features and value of the knowledge they produce significantly differ. To illuminate these differences, I first synthesize prominent philosophical accounts of collective knowledge, demonstrating that the emergent intelligence of human collectives is metaphysically grounded in individual mental states, shared intentionality, testimony, deliberation, and social epistemic norms, thereby linking their epistemic value closely to intentionality and interpretive practices. By contrast, I argue that the emergent intelligence of AI systems is metaphysically grounded in computational structures and data-processing elements operating in parallel, relying primarily on extensive data aggregation, statistical correlations, and computational inference, and notably lacking the unified conscious agency or intrinsic intentionality characteristic of human systems.
Examining these epistemic structures highlights significant differences in epistemic features such as transparency, justification, epistemic agency, and normative accountability, differences arising directly from their distinct metaphysical groundings. On the basis of these differences, I argue that neither human collectives nor AI systems individually possess all the features required for robust epistemic agency: human collectives contribute intrinsic intentionality, normative understanding, and interpretative coherence, while AI systems contribute computational power, speed, and pattern-finding capabilities, though often opaquely. Metaphysically and normatively, this analysis supports integrating human collectives and AI systems into hybrid epistemic structures, ultimately demonstrating that metaphysical analysis reveals integration not merely as beneficial but as epistemically required for advancing collective intelligence responsibly. Such hybrid collectives would combine the complementary strengths of each, yielding a form of emergent intelligence that is both meaningfully grounded and computationally powerful.
14.15–15.00 Pelin Kasar
(Central European University)
"The Responsibility Gap as a Problem of Tracing"
AI systems operate with increasing autonomy and play a growing role in our social lives. This raises pressing questions about moral responsibility: Who is responsible when such systems cause unjustified harm? The lack of sufficient human control or foresight in AI decision-making leads to what has been called the responsibility gap—cases where harm occurs, yet no individual seems appropriately responsible.
A parallel issue arises in the context of collective action. Certain organized collectives are often described as intentional, goal-directed agents capable of acting with a degree of autonomy. When such collectives cause unjustified harm, who is morally responsible? If no individual member has sufficient control or knowledge regarding the collective action, it seems, again, that no one can be held responsible.
In this talk, I interpret both of these apparent gaps in responsibility as instances of a more general tracing problem. According to the tracing strategy, responsibility for an outcome need not stem from the agent's immediate action but can be anchored in an earlier point at which the agent possessed the requisite knowledge and control (Vargas 2005). Many prominent theories of moral responsibility depend on some version of this idea. Tracing helps explain how agents can be held responsible in cases that depart from paradigmatic responsibility conditions. For example, consider a drunk driver who causes an accident. Although the driver lacks control at the moment of the crash, we hold them responsible because their impaired state traces back to a prior voluntary decision to drink.
However, there are cases in which tracing seems to fail—such as actions driven by unreflective habits, character traits, other non-deliberative aspects of agency (Vargas 2005), or implicit biases. I refer to these as unintentional actions. In such instances, although we may intuitively regard the agent as responsible, it is difficult to identify any earlier point at which they satisfied the conditions of control and knowledge typically required for moral responsibility.
I argue that outcomes involving AI systems and collective agents present similar challenges. By viewing these cases as instances of failed tracing, we can better understand the responsibility gap and consider how to respond to it. In the final part of my talk, I look at some proposed solutions for responsibility in unintentional actions and suggest how they might apply to AI and collective agents.
Coffee Break 15.00–16.00
16.00-16.45 Niël Conradie
(RWTH Aachen University)
"Human-Machine Interaction and Collectives—A Pragmatic Defence"
Humanity’s future will include greater integration with different kinds of technological systems, from increasingly interactive gadgets, e.g. in the domain of generative AI, to professional decision support systems and large-scale industrial machinery. In turn, most of these technologies will become increasingly autonomous in their capacities to react to human input and to produce adequate output according to their intended use. This is likely to confront us with reasons to adapt some of our currently accepted concepts and practices. We contend that among these is a persuasive pragmatic reason to adapt our concept of collective responsibility and the practices attendant to it. The core of this pragmatic reason lies in how it allows us to respond to the issue of possible techno-responsibility gaps, which have been at the centre of much of the attention and debate surrounding autonomous technologies.
The discussion of whether techno-responsibility gaps exist is often undertaken in more-or-less binary terms. However, our guiding hypothesis is that we ought to consider a spectrum of interactiveness as a baseline for such an assessment. Because it does not acknowledge that many of the discussed instances of emerging techno-responsibility gaps require humans to partially externalize their cognitive capacities to a machine, and thus to enter interactive relationships, the binary understanding of responsibility gaps fails to capture the full complexity of the situation. We contend that as one proceeds along the spectrum of increasingly deep interactivity between human and machine, individual responsibility fails to provide the tools necessary to account fully for responsibility, and we turn to the idea of collectivizing responsibility in such cases. And crucially, since many of these cases involve one or more humans interacting with a technology, the collective at stake here is one consisting of a human agent and a sufficiently interactive machine. However, as collectives are commonly understood as resulting from exclusively agent-agent relationships, we will first have to defend the possibility of such human-machine collectives.
Therefore, we will claim that collectives can be instantiated by certain agent-non-agent relationships, with the central condition being that this relationship reaches a meaningful level of interactivity. We take a requirement for this to be that the technology in question occupies a particular region of an Interactive-Autonomous Spectral Field. Without having to single out specific humans in the loop or consider machines as bearers of responsibility, we may consider humans and the machines they interact with as a morally relevant collective. Thus, responsibility is not distributed to either the human(s) or the machine(s), but neither does it leave a gap of unaccounted responsibility – the collective picks up the proverbial tab. It is a genuinely collective understanding of responsibility, as it does not reduce to a set of individual responsibilities. Adopting this approach also avoids having to assign the machine itself any kind of responsibility in the first place, which, in line with much of the present discourse, we take to be an unacceptable proposal.
16.45–18.15 Pekka Mäkelä
(University of Helsinki)
"Institutional Agents Facing Disruptive Technologies"
The speed of progress in the development of automation, autonomously operating artificially intelligent systems, and social and industrial robotics is flabbergasting. Algorithms and robots functioning and making decisions in areas that used to be controlled by humans alone, for instance in stock trading, medical diagnosis, and car driving, are becoming ubiquitous. This development is inspiring but also raises many worries. One rather generic fear concerning increasingly autonomous systems has to do with responsibility: what happens to responsibility when technology is less and less in the control of human agents? Indeed, “responsibility” has become a catchword, which politicians, company representatives, and researchers frequently use to flag that they are sensitive to the moral, social, and political risks that accompany technological change and evolution. There is a good variety of notions of responsibility, and many debates and discussions might benefit from being a bit more exact about the sense of “responsibility” employed. In this talk I will focus on moral and legal senses of responsibility.
I will distinguish between two ways of understanding “formalization of responsibility”: one that tracks the ideas discussed under the generic title of the responsibility of AI or the responsibility of robots, and another that tracks ideas discussed under the generic title of responsible AI or responsible robotics. At the core of the former is the idea that we could formalize responsibility in the sense of capturing moral responsibility in a computer program and thereby bring about a moral agent, say a robot, capable of bearing moral responsibility in much the same sense as some human beings are considered to be morally responsible. This would provide us with a neat solution to the problem of responsibility gaps. I will critically evaluate the fruitfulness of this sense of formalizing responsibility; my critical argument builds in part on Alfred Mele’s work on autonomous agency.
I end up arguing in favor of an institutional interpretation of “formalizing responsibility”, which tracks the ideas discussed under “responsible AI”. Here I am thinking about social and institutional structures that can be identified, at least to an extent, in terms of constitutive rules. Some such rules create social roles and positions which can be cashed out in terms of tasks, formal tasks if they are defined by codified rules. This provides us with a sense in which both prospective and retrospective responsibility can be formalized. I would claim that structural institutional responsibility allocation on the basis of formal rules is the most promising approach to the problems of moral and legal responsibility created by autonomous systems. This sense of formalizing responsibility leads us to study and evaluate the responsibility of human beings, whether individually, jointly, or collectively. In this context I will briefly discuss regulation and hard and soft ethics. At the very end, if time allows, I will introduce a down-to-earth way of contributing to the implementation of this sense of formalizing responsibility by raising institutional sensitivity to moral reasons.
Day 2
10.30–11.15 Irene Dominecale & Marco Emilio
(Istituto Universitario Salesiano Venezia)
"The Ambivalent Challenge of Tokenizing Collective Agency"
Among digital innovations, blockchain technologies (BCT) have drawn significant attention for their potential to transform economic transaction governance (Voshmgir, 2020). Digital ledgers that interact autonomously with human participants make new interactive capacities available, and these have attracted rich philosophical inquiry (Jacobs, 2020). Scholars have explored BCT as narrative tools (Reijers & Coeckelbergh, 2017) or substantive metaphors (Jacobetty & Orton-Johnson, 2023), alongside metaphysical research on trustless social construction (Lipman, 2023). However, since these digital tools are applied to reframe social actions (Marres, 2016), investigating them intersects with questions of collective agency and institutions. At first glance, mutual transactions through BCT minimize the role of mental states in building joint actions between anonymous subjects (Kirchschlaeger, 2023).
Regarding these issues, insights can be gleaned from empirical investigation within digital geography and information sciences, where BCT is utilized in fostering civic goods through a co-design methodology (Avanzo et al., 2023). This inquiry suggests a more nuanced view of interactions between individuals, collectives, and blockchain technology. Moreover, the need for relational processes in implementing digital ledgers and clarifying the tokenization of non-economic goods underscores the intricate nature of the issue. However, a coherent intertwining of these two issues has yet to be fully developed.
From our interdisciplinary standpoint, advancing this investigative effort can significantly enhance current debates in at least two different areas. Within social ontology, it could enhance the inquiry about collective agency (Pettit, 2023) and its reliance on (digital) artifacts to make new capacities available for individuals and groups. Simultaneously, it might help clarify what kinds of affordances are peculiar to different blockchain technologies and which notions of collective agents are at stake (Allen & Potts, 2023). Analyzing the intentional design of digital artifacts is crucial for establishing criteria to evaluate the reciprocal actions of collective and artificial agents by identifying the agencies and capacities at play.
Pursuing this general goal, we will start by investigating some accounts of collective agency through the lenses of nonideal social ontology (Burman, 2023), nonideal epistemology (McKenna, 2023), and metaphysical inquiry on digital artifacts (Bailey, 2024; Turilli & Floridi, 2008), aiming to evaluate the notion of a hybrid collective agent (Brouwer et al., 2021). Hence, we will scrutinize some cases of the application of BCT in civic domains. Due to its configuration, the implementation of this technology is notably complex: on an empirical level of analysis, economic, social, and moral incentives are often inextricable and demand a "reading for difference" approach (Gibson & Graham, 2008) applied to digital technologies (Certomà, 2023; Lynch, 2020; Santala & McGuirk, 2022). In a third step, drawing on the contribution of Nguyen (2020), we will sketch an articulation of the interplay between artifacts' design, participants' motivational states, and social ranking in making available new libraries of capacities through the development of different layers of agency. We will suggest that a crucial role is played by how decision-making processes are structured between experts, researchers, policymakers, and laypeople, and by how epistemic and social power asymmetries are managed.
We will develop the idea that the intentional promotion of trust is necessary to build new collective affordances (Weichold & Thonhauser, 2020) through digital artifacts. However, diverse understandings of collective agencies and learning processes lead to different outcomes. Finally, we will outline open questions about agency transformation, tokenization, and value capture (Nguyen, 2024) in social ontology and digital studies.
11.15–12.00 Alper Güngör
(McGill University)
"AI art, collaboration, and appreciation"
AI-generated art has reignited the debate on the notion of authorship. It is largely agreed that authorship requires intentional action (Livingston 2005; Irvin 2005; Mag Uidhir 2013) and that current AI systems, lacking mental states, cannot be authors. However, users of generative AI programs cannot easily claim sole authorship either, given the relative ease of use and the unpredictable nature of generative AI programs. Given the difficulties surrounding authorship attribution, some artists report a sense of collaboration between the AI systems and themselves (Anscomb 2024). In this paper, I examine the sense of collaboration in the case of AI art and argue that AI art, if it falls within the purview of art, is in most cases the result of collaboration between different agents, particularly between AI program designers and users. After surveying various forms of collaboration, I suggest that the best way to capture the sense of collaboration at issue is co-creatorship (Reicher 2015; Bantinaki 2016). One might reject this based on the assumption that co-creatorship minimally requires communication of intentions between the parties involved (Anscomb 2024). Drawing on an analogy between AI art and video games as process art (Nguyen 2020), I argue that co-creatorship need not require direct communication of intentions between parties. In the rest of the paper, I explore some ethical and aesthetic implications of such collaboration. One crucial aesthetic implication is that the curation of the training material (images, texts, sounds, etc.) constitutes part of the work’s history of making, thus making it relevant for our artistic appreciation.
Anscomb, C. (2024). AI: artistic collaborator? AI & Society, 1–11.
Bantinaki, K. (2016). Commissioning the Artwork: From Singular Authorship to Collective Creatorship. Journal of Aesthetic Education 50 (1):16-33.
Irvin, S. (2005). Appropriation and Authorship in Contemporary Art. British Journal of Aesthetics, 45(2), 123-137.
Livingston, P. (2005). Art and Intention: A Philosophical Study. Oxford: Oxford University Press.
Mag Uidhir, C. (2013). Art and art-attempts. Oxford University Press.
Nguyen, C. T. (2020). Games: Agency as Art. Oxford: Oxford University Press.
Reicher, M. E. (2015). Computer-generated music, authorship, and work identity. Grazer Philosophische Studien, 91, 107-130.
Lunch 12.00–13.30
13.30–14.15 Clara Reidl-Reidenstein
(University of Oxford)
"Citizens of where?
The impossible contradiction of claiming jurisdiction over LLMs"
People are making ever more important decisions based on what large language models (LLMs) such as ChatGPT tell them to do. LLMs are taking on tasks performed by members of the professions that hold some of the highest-responsibility jobs in our societies: lawyers, doctors, therapists, and civil service agents. These are often professions for which we have stronger malpractice law. If we act on the basis of bad legal advice, say, there is a process through which to adjudicate whether the human professional was morally (and legally) responsible, or whether they should be held liable.
I make two central claims. First, that the nature of LLMs will change the state’s relationship toward them. That is, states will want to claim jurisdiction over LLMs, if not as fully fledged moral agents, then to protect their own citizens against harm. Second, however, I argue that states will run into problems in shifting their relationship toward LLMs, because current theories of state only allow states to claim jurisdiction over embodied entities in physical space.
Why might the state’s relationship toward an entity such as an LLM change? To illustrate why this might be the case, I focus on instances in which things go wrong. The case would take the following form: an LLM gives a citizen bad advice, she acts on that bad advice, and if this were a human agent (such as a lawyer or a therapist), there would be a process to establish whether there was malpractice. In short, these are cases in which we want to attribute responsibility because the wronged agent has an apt reactive attitude toward having been wronged (see Strawson 1962). I argue that the current nature of LLMs triggers what is known as a ‘responsibility gap’ in the literature (Matthias 2004). A responsibility gap is triggered when ‘a minimal agent’ does X, that agent would have been held responsible if they were a person, but neither they nor anyone else is (fully) responsible for X (Himmelreich 2019). An agent is ‘minimal’ if they have intentional agency but lack moral agency. The reason this shifts the state’s relationship toward LLMs is functional. That is, states might grant LLMs legal (as opposed to moral) personhood on the basis that this would prevent harm caused to their moral citizens. In making this case, I draw a comparison between LLMs and corporate group agents, who hold the status of legal persons in all jurisdictions.
The second claim, however, is that while states can claim jurisdiction over group agents, they cannot claim jurisdiction over LLMs due to the ‘embodiment problem’. States claim jurisdiction over group agents such as corporations based on where they are headquartered or incorporated. While groups can be (and often are) distributed all over the world, they choose one physical location as the centre of their activities. Granted, the physical location is often chosen for strategic reasons such as beneficial tax systems, but they are nonetheless required to conduct a significant amount of their operations in that location to generate legitimate jurisdictional claims over the agent. This is because our current theories of state trigger the need for jurisdictional rights only over agents that are physically located side by side within a territorial border. LLMs challenge this. Neither their data, programmers, owners, nor they themselves are physically located within one specific location. If the responsibility gap stands, and there is no direct link between Sam Altman or any individual programmer and what ChatGPT tells me, why then would the USA have any more claim to have jurisdictional rights over ChatGPT than Germany, say?
It is difficult to see how any given state could (legitimately) close the responsibility gap. This presents serious challenges for our current theories of state and forces us to choose between fundamentally rethinking how we attribute responsibility and rethinking the embodiment condition for claiming jurisdiction.
14.15–15.00 Krzysztof Posłajko
(Jagiellonian University)
"Functionalism, alien minds, paperclip maximisers, and corporate responsibility ascriptions"
The aim of this talk is to show that adopting functionalism about the mind might, paradoxically, have adverse consequences for the project of making sense of ascriptions of intentionality and responsibility to corporate agents. This is because functionalism admits the existence of alien minds and corporations might well turn out to be such alien minds.
Functionalism claims that mental states should be defined by their place in the network of connections between inputs, outputs, and other mental states (Block & Fodor 1972, Lewis 1972). This invites the idea of multiple realizability, which provides possible ground for realism about corporate mentality: prima facie, corporations, understood as structured wholes, can be candidates for realizers of mental states (List & Pettit 2011, Strohmaier 2020, Collins 2023).
However, functionalism also entails that there might be alien minds: minds that possess very different functional networks and, consequently, very different cognitive and motivational structures. These differences might be so vast that it is not obvious whether the mental states of such alien minds can actually be described using the same vocabulary as typical human mental states, as discussions about ‘Mad pain’ and ‘Martian pain’ attest (Lewis 1980, Schwitzgebel 2012). But even if we agree that aliens can have the same generic sorts of mental states (emotions, desires, beliefs, etc.) as humans, an important feature of such minds is that their motivations can be extremely different from ours. Take the case, discussed in the context of AI debates, of ‘paperclip maximizers’, i.e. hypothetical systems whose main and overriding motivation is to produce as many paperclips as possible (Bostrom 2014, Tubert & Tiehen 2024).
If, on functionalist grounds, we treat corporations as having intentional states, they will likely turn out to be alien minds, somewhat akin to a paperclip maximiser. A massive investment corporation like BlackRock or Vanguard probably has a cognitive system with a very ‘alien’ and rigid motivational structure, whose sole non-negotiable aim is to maximize shareholder value – a corporation with motivations akin to a psychopath (Schwenkenbecher 2024). But even less ‘psychopathic’ corporations might have very eccentric sets of beliefs and motivations.
If corporations might be alien minds, then this possibility undercuts the idea that corporations are genuine moral agents. As the current debate about corporate moral responsibility shows, to be a genuine moral agent, one has to have a far richer set of psychological states than just beliefs and desires: one needs the ability, for example, to feel remorse and respond to moral reasons (Tollefsen 2003, McKenna 2006, Björnsson & Hess 2017, Shoemaker 2019). But nothing guarantees that corporations have the right psychological profile. Their motivational and emotional structure might well turn out to be very ‘alien’. Thus, adopting functionalism makes it very likely that many actually existing corporations are unfit to be held morally responsible. This might motivate us to adopt the fictionalist view that attributions of mentality to corporations are pretended ascriptions rather than fallible empirical hypotheses.
Björnsson, G., & Hess, K. (2017). Corporate Crocodile Tears?. Philosophy and Phenomenological Research, 94(2), 273-298.
Block, N., & Fodor, J. (1972). What Psychological States Are Not. Philosophical Review, 81: 159–181.
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
Collins, S. (2023). Organizations as wrongdoers: From ontology to morality. Oxford University Press.
McKenna, M. (2006). Collective Responsibility and an Agent Meaning Theory. Midwest Studies in Philosophy, 30(1).
Lewis, D. (1972). Psychophysical and theoretical identifications. Australasian Journal of Philosophy, 50(3), 249-258.
Lewis, D. (1980). Mad pain and Martian pain. In The Language and Thought Series (pp. 216-222). Harvard University Press.
List, C., & Pettit, P. (2011). Group agency: The possibility, design, and status of corporate agents. Oxford University Press.
Schwenkenbecher, A. (2024). Are Corporations Like Psychopaths? Lessons On Moral Responsibility From Rio Tinto's Juukan Gorge Disaster. Business Ethics, the Environment & Responsibility.
Schwitzgebel, E. (2012). Mad Belief? Neuroethics, 5, 13–17.
Shoemaker, D. (2019). Blameworthy but Unblameable: A Paradox of Corporate Responsibility. Georgetown Journal of Law & Public Policy, 17, 897–917.
Strohmaier, D. (2020). Two theories of group agency. Philosophical Studies, 177(7), 1901-1918.
Tollefsen, D. (2003). Participant reactive attitudes and collective responsibility. Philosophical Explorations, 6(3), 218-234.
Tubert, A., & Tiehen, J. (2024). Existentialist risk and value misalignment. Philosophical Studies, 1-18.
Coffee Break 15.00–16.00
16.00–17.30 Christian List
(LMU Munich)
"Can artificial agents have free will?"
There has been much discussion of whether it makes sense to ascribe agency and responsibility to artificial entities, such as corporate entities and AI systems. But there has been much less discussion of whether such entities could have free will too. According to many traditional views, free will is a uniquely human phenomenon. I will argue that if we adopt a robust naturalistic understanding of free will, then we have good reasons to conclude that group agents and AI systems can have free will too. I will discuss the relevant account of free will and its implications. Background papers are available at https://philarchive.org/rec/LISDGA and https://philarchive.org/rec/LISCAS-3.