The surprising capabilities demonstrated by AI technologies, overlaid on detailed data and fine-grained control, give cause for concern that agents can wield enormous power over human welfare, drawing increasing attention to ethics in AI.
Ethics is inherently a multiagent concern---an amalgam of (1) one party's concern for another and (2) a notion of justice. To capture this multiagent conception, this tutorial introduces ethics as a sociotechnical construct. Specifically, we demonstrate how ethics can be modeled and analyzed, and how requirements on ethics (value preferences) can be elicited, in a sociotechnical system (STS). An STS comprises autonomous social entities (principals, i.e., people and organizations), technical entities (agents, which help principals), and resources (e.g., data, services, sensors, and actuators).
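To make the STS structure concrete, the following is a minimal Python sketch under our own naming assumptions; the classes and fields (Principal, Agent, Resource, STS, value_preferences) are illustrative, not the formalism presented in the tutorial.

from dataclasses import dataclass, field

# Illustrative sketch of the STS structure described above.

@dataclass
class Principal:
    """An autonomous social entity: a person or an organization."""
    name: str
    # Hypothetical representation of value preferences, e.g., {"privacy": 0.8}
    value_preferences: dict[str, float] = field(default_factory=dict)

@dataclass
class Agent:
    """A technical entity that acts on behalf of a principal."""
    name: str
    principal: Principal

@dataclass
class Resource:
    """A data source, service, sensor, or actuator available in the STS."""
    name: str
    kind: str  # e.g., "data", "service", "sensor", "actuator"

@dataclass
class STS:
    """A sociotechnical system: principals, their agents, and shared resources."""
    principals: list[Principal]
    agents: list[Agent]
    resources: list[Resource]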
This tutorial includes three key elements:
Specifying a decentralized STS, representing the ethical postures of individual agents as well as the systemic (STS-level) ethical posture.
Reasoning about ethics, including how individual agents can select actions that align with the ethical postures of all concerned principals (see the sketch after this list).
Eliciting value preferences (which capture ethical requirements) of stakeholders using a value-based negotiation technique.
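As a concrete illustration of the second element, the sketch below shows one simple way an agent could select an action consistent with the value preferences of all concerned principals. The weighted-sum scoring and the promote/demote effects table are illustrative assumptions, not the specific reasoning techniques covered in the tutorial.

def select_action(candidate_actions, value_preferences_by_principal, effects):
    """
    candidate_actions: list of action names, e.g., ["share_location", "withhold"]
    value_preferences_by_principal: list of dicts, one per concerned principal,
        mapping a value (e.g., "privacy") to its weight for that principal
    effects: dict mapping (action, value) -> float in [-1, 1], where +1
        strongly promotes and -1 strongly demotes that value
    """
    def score(action):
        # Aggregate the promotion/demotion of each value, weighted by how much
        # each concerned principal cares about that value.
        return sum(
            weight * effects.get((action, value), 0.0)
            for prefs in value_preferences_by_principal
            for value, weight in prefs.items()
        )
    return max(candidate_actions, key=score)

# Hypothetical example: an assistant deciding whether to share its user's
# location with a caregiver, given the user's and caregiver's preferences.
prefs = [{"privacy": 0.9, "safety": 0.4}, {"safety": 0.8}]  # user, caregiver
effects = {
    ("share_location", "privacy"): -1.0, ("share_location", "safety"): 1.0,
    ("withhold", "privacy"): 1.0, ("withhold", "safety"): -0.7,
}
print(select_action(["share_location", "withhold"], prefs, effects))  # share_location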
We build upon our earlier tutorials on engineering ethics in sociotechnical systems (e.g., at AAMAS 2020, IJCAI 2020, and ACSOS 2020) and on engineering a decentralized multiagent system (e.g., at AAMAS 2015 and IJCAI 2016). However, we extend those tutorials substantially, incorporating recent ideas on ethics and values applied to AI. Attendees will learn the theoretical foundations as well as how to apply those foundations to systematically engineer an ethical STS.
This tutorial is presented at a senior undergraduate student level. It is accessible to developers from industry and to students. Typical attendees for our past tutorials have been researchers and practitioners from industry and government, developers, graduate and senior undergraduate students, and university faculty.
There has been increasing interest in Ethics and AI in recent years, and rightly so. Normative and sociotechnical systems have been key topics of interest in the AI literature, specifically in multiagent systems. We will demonstrate in this tutorial that multiagent systems research has much to offer in making AI systems ethical, not just at the single-agent level but at a societal level. We will bring together the latest research, including theoretical underpinnings and practical approaches, valuable to both researchers and practitioners. However, realizing the full potential of multiagent systems for supporting ethics requires educating a new generation of students and researchers in the relevant concepts. Our tutorial intends to do so with the following objectives:
Motivate and explain a topic of emerging importance for AI and MAS
Introduce novices to major topics within AI and MAS
Introduce expert non-specialists to an AI and MAS subarea