David J. Gunkel (Northern Illinois University): Person, Thing, Robot
Robots are a curious sort of thing. On the one hand, they are designed and manufactured technological artifacts. They are things. Yet, on the other hand, these things are not quite like other things. They seem to have social presence. They are able to talk and interact with us. And many are designed to mimic or simulate the capabilities and behaviors that are commonly associated with human or animal intelligence. Robots therefore invite and encourage zoomorphism, anthropomorphism, and even personification. In his new book Person, Thing, Robot (MIT Press, 2023), David J. Gunkel sets out to answer the vexing question: What exactly is a robot? Rather than try to fit robots into the existing moral and legal categories by arguing for either their reification or their personification, however, Gunkel argues for a revolutionary reformulation of the entire system, developing a new approach to technology ethics that can scale to the unique opportunities and challenges of the twenty-first century and beyond.
Minao Kukita (Nagoya University): Artificial Intelligence and the Future of Trust
The use of AI for profiling—classifying individuals and predicting their attributes or behaviors based on big data—has become one of the most profitable applications of artificial intelligence. However, such applications have been criticized for perpetuating biases and stereotypes, exacerbating the vulnerability of marginalized groups, and encouraging excessive data collection that infringes on privacy. Moreover, sociologist Shoshana Zuboff warns that the widespread adoption of AI may replace human relationships with machine-driven processes, substituting trust among people with "certainty". This presentation explores how the growing prevalence of AI-driven profiling impacts trust and human relationships, drawing on an analysis of the role and importance of trust in society.
Tami Yanagisawa (Kwansei Gakuin University): AI and "God"
The fact that some people see a "mind" in AI leads us to speculate that AI could eventually become an object of worship, like a "god." However, at the same time, research has shown that the more people come into contact with AI, the less religious they become. Jackson (University of Chicago) interprets this change as a process in which people living in highly automated spaces no longer need the supernatural agents that were once required to solve instrumental problems. In this presentation, I will re-examine Jackson's intriguing research from the perspectives of religious studies and anthropology. I will also consider the similarities and differences between the relationships that humans have with other humans and with transcendent agents, and those they have with AI.
Sigurd Hovd (Peace Research Institute Oslo): Dawn of the Moral Dead: On the Conceivability of Artificial Moral Agents
A philosophical zombie is a physical duplicate of a conscious subject, functionally identical to this subject but lacking any characteristics associated with qualitative consciousness or sentience. The logical conceivability of such beings, and what metaphysical conclusions one can plausibly infer from it, has been, and remains, a topic of much contention in contemporary analytic philosophy of mind. Recently this creature of philosophical thought experiments has also entered debates on artificial moral agency. For if we realize that to think of algorithms as potential artificial moral agents is to think of a kind of moral zombie, we should also realize that a true artificial moral agent is a fundamentally incoherent notion. So Carissa Véliz has recently argued, pointing to the fact that the concepts of moral autonomy and accountability, arguably essential features of what we think it means to be a moral agent, rely centrally on our conception of sentience. Sentience, according to Véliz, plays an irreplaceable epistemic role in our access to moral truths, so that no moral zombie could, in its actions, be said to exhibit moral agency.
Responding to this argument, I will propose the following two claims: first, Véliz's argument does not take into account the possibility of an artificial agent standing in an indirect epistemic relationship to moral facts; second, Véliz's argument does not take into account the asymmetric theoretical motivations guiding accounts of artificial moral agency versus human moral agency.
We must take seriously the different theoretical motivations guiding our inquiries into human and artificial agency, and doing so should lead us to recognize an important limitation of traditional anti-reductionist arguments concerning human subjectivity. These arguments, upon which Véliz's position rests, are not immediately transferable to questions concerning the possibility of artificial agents acting for the sake of normative phenomena that we believe to be somehow grounded in human subjectivity, and the notion of a moral zombie is consequently far more conceivable than it may first appear. In contemplating the possibility of an artificial agent having indirect access to our moral judgements, we ought also to be open to the possibility of its making these judgements its own in action, and thus exhibiting a kind of derivative moral agency. While the theoretical motivations guiding our inquiries into human moral agency give us good reasons to reject such a form of agency as moral agency proper, I argue that these same motivations ought not to be the primary ones guiding our inquiries into the possibility of artificial moral agency.
Kamil Mamak (Jagiellonian University): In defense of artificial suffering
One of the most discussed topics in AI ethics is the moral and legal status of robots (see, e.g., Gordon and Nyholm 2021; Müller 2020). The discussion offers different rationales upon which it would be acceptable to include AI systems or robots in the moral circle and, in consequence, to grant robots rights (for an overview, see Gunkel 2018). The most common position seems to be that sentience should be the criterion for granting robots moral and legal status (see, e.g., Gibert and Martin 2021; Mosakas 2020). But scholars do not thereby postulate that we should create artificial pain. The discussion on moral status is tied to the question "What if AI/robots are sentient?"; whether we should create artificial pain is another issue. In that respect, the most popular view seems to be that we should not create artificial suffering: it is considered a burden that would impose duties on humans (see, e.g., Hildt 2023; Basl 2013; Dung 2023; Mamak forthcoming). Metzinger even calls for an international moratorium on synthetic phenomenology (Metzinger 2021). However, the emergence of artificial consciousness might not be the result of intentional design (see, e.g., Chalmers 2023). In my paper, I present arguments in defense of artificial suffering and give reasons why artificial suffering might be welcomed as a side effect of development and, in some cases, even as an intentionally designed feature of artificial entities. For the second option, I will present the conditions under which we should consider creating artificial suffering. I will argue that artificial pain might be a necessary condition for the successful and safe integration of artificial entities into human social life. I will refer to criminal law, where the capacity to feel pain is seen as a central feature enabling this field to operate and thereby protect citizens.
Shinya Oie (National Institute of Technology, Kurume College): AI Should Coexist with Humans in a Way that Promotes Human Relational Autonomy
The rapid development of research and practice in artificial intelligence (AI) has succeeded in providing functions comparable to, or even surpassing, human capabilities. This technological success presents a vision of a future society in which AI is ever more prevalent, and hence arouses anxiety. There is no doubt that a large part of our lives, including work, leisure, and political activities, is greatly affected by this technology. In such a situation, it is extremely important to consider in advance normative ideas that could guide our lives with AI in the future. How should humans coexist with AI?
In this presentation, I propose that AI should coexist with humans in a way that promotes human relational autonomy. In ethics and political philosophy, autonomy refers to self-governance (Christman, 2020). This concept is of great importance and is a value that various societies can pursue (Raz, 1986). Relational autonomy is a concept proposed mainly by feminist philosophers in opposition to individualistic interpretations of autonomy (Mackenzie & Stoljar, 2000). According to these philosophers, autonomy can be nurtured or damaged through social relationships (Stoljar, 2024). Adopting this perspective makes it possible to consider how autonomy is socially constituted and how effective it is in society. It should be noted, though, that in the traditional theory of relational autonomy, "society" refers to social norms, legal systems, and human relationships (Mackenzie, 2014). It is important to consider how these factors influence individual autonomy. However, existing research has failed to analyze the impact of technological artifacts and systems, particularly robots and AI, on the relational autonomy of persons. Therefore, I first provide a framework for developing a theory of relational autonomy that takes technological artifacts and systems into account. I then present a norm for coexisting with AI, namely that this technology should protect and cultivate human relational autonomy. This discussion provides both a normative analysis of existing AI technology and guidelines for its desirable development in the future.
Elay Shech (Auburn University): Bias, Conceptual Engineering, and Artificial Agents
This presentation concerns the psychological and social consequences of interactions between artificial and human agents, and the ethical issues that arise in the design and development of artificial agents. We focus specifically on large language models (LLMs), such as OpenAI’s ChatGPT, which reflect, and can potentially perpetuate, social biases in language use, behavior that is likely to become widespread as LLMs take on prevalent roles in social interactions. Conceptual engineering aims to revise our concepts to eliminate such bias. We show how machine learning and conceptual engineering can be fruitfully brought together to offer new insights to both conceptual engineers and LLM designers. Specifically, we suggest that LLMs can be used to detect and expose bias in the prototypes associated with concepts, and that LLM de-biasing can serve conceptual engineering projects that aim to revise such conceptual prototypes. At present, these de-biasing techniques primarily involve bespoke interventions based on choices made by the algorithm’s designers. Thus, conceptual engineering through de-biasing will include making choices about what kind of normative training an LLM should receive, especially with respect to different notions of bias. This offers a new perspective on what conceptual engineering involves and how it can be implemented. Our conceptual engineering approach also offers insight to those engaged in the design and de-biasing of LLMs. Namely, when it comes to conceptual engineering, the main focus for theorists interested in influencing concepts has been broadly semantic or definitional. We think this is overly narrow. Broadening the purview of conceptual engineering to include prototypes helps us see how de-biasing can be a tool to influence concepts, especially socially important ones. This method of conceptual engineering also opens up new ways of thinking about the implementation of conceptual engineering projects. For LLM designers, looking at de-biasing as a tool for conceptual engineering is a way to bring to the forefront the normative questions that must be addressed in deciding how and when to de-bias LLMs. We draw attention to the importance of thinking about bias and associated concepts philosophically before deciding on a concrete tripartite framework for how to de-bias, especially given the bespoke nature of the work.
We end by reflecting on ostensible implications of our account for questions about the moral status of artificial agents and human-AI interaction (such as when interacting with LLMs), in light of David Gunkel’s relational approach to robot rights, which emphasizes extrinsic social relationships. For example, if AI systems are reasonably considered moral agents, or at least welfare subjects capable of having well-being, is it morally permissible to de-bias LLMs as tools for conceptual engineering? Similarly, is there a moral imperative to de-bias LLMs as a kind of habituation that is essential for the development of character and virtues?