Abstracts

Jan Broersen

Further considerations on causal responsibility
We consider the problem of formally defining backward-looking responsibility for outcomes in a non-deterministic world in which agents have had the opportunity to interfere at multiple moments. We formalise two core modes of (causal) responsibility: (1) being the agent who initialised a course of events that led to an outcome, and (2) being an agent who had the opportunity to intervene in a course of events that led to an outcome but refrained from doing so. We point to further causal (but-for) modes but leave their formal definitions to future research. Surprisingly, the categories of causal responsibility we define have not been studied before in the literature, the only exception being the so-called 'achievement stit'. We apply our insights to further investigate the controversial claim that causal modelling is *not* a good starting point for thinking about responsibility.
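
As a rough gloss, and not the abstract's own definitions, the two modes can be sketched in standard branching-time/choice (stit-style) notation; mode (1) is close in spirit to the achievement stit mentioned above, and all notation below (H_m, Choice, the truth predicate on histories) is illustrative.

```latex
% Hedged gloss in branching-time/choice notation: H_m is the set of histories
% through moment m, Choice_a^m(h) the cell of a's choice partition at m that
% contains the actual history h, and \varphi[h'] truth of the outcome on h'.

% (1) Initialising: at some earlier moment, a's actual choice guaranteed the
%     outcome, although the outcome was still avoidable at that moment.
\[
\mathsf{Init}(a,\varphi) \;:\equiv\;
  \exists m < \mathit{now}\,
  \big( \forall h' \in \mathit{Choice}_a^m(h)\, \varphi[h']
        \;\wedge\; \exists h'' \in H_m\, \neg\varphi[h''] \big)
\]

% (2) Refraining to intervene: the outcome obtains, and at some earlier moment
%     a had an alternative choice that would have prevented it, but did not take it.
\[
\mathsf{Refrain}(a,\varphi) \;:\equiv\;
  \varphi[h] \;\wedge\;
  \exists m < \mathit{now}\;
  \exists C \in \mathit{Choice}_a^m\,
  \big( h \notin C \;\wedge\; \forall h' \in C\, \neg\varphi[h'] \big)
\]
```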

Ilaria Canavotto

Explanation through legal precedent-based reasoning 

Computational models of legal precedent-based reasoning developed in the field of Artificial Intelligence and Law have recently been applied to the development of explainable Artificial Intelligence methods. The key idea behind these approaches is to interpret training data as a set of precedent cases; a model of precedent-based reasoning can then be used to generate an argument supporting a new decision (or prediction, or classification) on the basis of its similarity to a precedent case [3,4,5]. In this talk, which builds on [1,2], I will present a model of precedent-based reasoning that provides us with an alternative way of generating arguments supporting a new decision: instead of citing similar precedent cases, the model generates arguments based on how the base-level factors present in the case to be explained support higher-level concepts. After presenting the model, I will discuss some open questions and work in progress.
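
A rough illustration of the kind of explanation this aims at, as opposed to citing a similar precedent: the factor names and the simple two-level hierarchy in the Python sketch below are invented for the example and are not taken from the model in [1,2].

```python
# Illustrative sketch: explaining a decision via a factor hierarchy rather than
# by citing a similar precedent case. Factor names and the aggregation scheme
# are invented for the example; they are not taken from the model in [1,2].

BASE_FACTORS = {  # base-level factor -> (higher-level concept, polarity)
    "info_was_reverse_engineerable": ("info_publicly_available", "pro"),
    "info_disclosed_in_negotiations": ("info_publicly_available", "pro"),
    "security_measures_taken":       ("info_publicly_available", "con"),
}

CONCEPT_SUPPORTS = {  # higher-level concept -> (decision it bears on, polarity)
    "info_publicly_available": ("trade_secret_misappropriation", "con"),
}

def explain(case_factors, decision):
    """Generate argument lines: which base-level factors support or undermine
    the higher-level concepts that bear on the decision."""
    lines = []
    for factor in case_factors:
        concept, pol = BASE_FACTORS[factor]
        lines.append(f"factor '{factor}' is a {pol}-reason for concept '{concept}'")
    for concept, (dec, pol) in CONCEPT_SUPPORTS.items():
        if dec == decision:
            lines.append(f"concept '{concept}' is a {pol}-reason for deciding '{decision}'")
    return lines

if __name__ == "__main__":
    case = ["info_was_reverse_engineerable", "security_measures_taken"]
    for line in explain(case, "trade_secret_misappropriation"):
        print(line)
```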


Agata Ciabattoni

Normative reasoning: From Sanskrit philosophy to AI

Normative statements, which involve concepts such as obligation and prohibition, are enormously important in a variety of fields, from law and ethics to artificial intelligence. Reasoning with and about them requires deontic logic, which is a relatively recent area of research. By contrast, for more than two millennia, one of the most important systems of Indian philosophy focused on analyzing normative statements. Mīmāṃsā, as it is called, looks at these statements found in the Vedas, the sacred texts of what is now called Hinduism, and interprets them by explaining precisely what course of action they require. This talk will describe our findings on the deontic reasoning of Mīmāṃsā [1], and ideas on how to apply them to design autonomous agents sensitive to legal, social and ethical norms [2]. The results I will present arise from a collaboration between logicians, Sanskritists and computer scientists.


Mario Günther

A theory of responsibility for artificial agents
In this talk, I propose a theory of responsibility for artificial agents. An artificial agent is responsible for an outcome just in case her actions caused the outcome, she believed that her actions might cause the outcome, and she intended the outcome. I will spell out the notions of causation, belief, and intention employed, and respond to the worry that responsibility does not entail causation. I will conclude that responsibility and explainability go hand in hand.
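
Schematically, and only as a gloss on the sentence above (the operators below are placeholders, not the talk's formal apparatus):

```latex
% Hedged gloss of the stated condition; Caused, Bel, Int are placeholders for
% the notions of causation, belief, and intention to be spelled out in the talk,
% and \Diamond records that the belief concerns a possible causal contribution.
\[
\mathsf{Resp}(a,o) \;\leftrightarrow\;
  \mathsf{Caused}(a,o) \;\wedge\;
  \mathsf{Bel}_a\!\big(\Diamond\,\mathsf{Caused}(a,o)\big) \;\wedge\;
  \mathsf{Int}_a(o)
\]
```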

John Horty

Knowledge representation for computational normative reasoning

I will talk about issues involved in designing a machine capable of acquiring, representing, and reasoning with information needed to guide everyday normative reasoning - the kind of reasoning that robotic assistants would have to engage in just to help us with simple tasks. After reviewing some current top-down, bottom-up, and hybrid approaches, I will define a new hybrid approach that generalizes ideas developed in the fields of AI and law and legal theory.
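
As background for the ideas from AI and law that the abstract refers to, the sketch below shows the familiar a fortiori, factor-based notion of precedential constraint; it is illustrative background only, not the new hybrid approach of the talk, and the factor names are arbitrary.

```python
# Illustrative background: a fortiori precedential constraint over factors.
# A precedent decided for side s constrains a new case to be decided for s
# when the new case is at least as strong for s and at most as strong for the
# other side. This is standard factor-based background from AI and law, not
# the hybrid approach proposed in the talk.

def constrains(precedent, new_case, side):
    """precedent/new_case: dicts mapping 'p'/'d' to sets of factors favouring
    that side; side: 'p' or 'd', the side the precedent was decided for."""
    other = "d" if side == "p" else "p"
    return (precedent[side] <= new_case[side] and
            new_case[other] <= precedent[other])

if __name__ == "__main__":
    prec = {"p": {"f1"}, "d": {"f3", "f4"}}   # precedent decided for plaintiff
    new  = {"p": {"f1", "f2"}, "d": {"f3"}}   # new case is stronger for plaintiff
    print(constrains(prec, new, "p"))         # True: a decision for 'p' is forced
```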


Joint work with Ilaria Canavotto.

Réka Markovich

Responsibility and rights

The word 'responsibility' is used with different interpretations, usually without clear reference to the misunderstandings this might bring about, or to their resolution. The American legal theorist Hohfeld had the very same starting point concerning the word 'right' when he put forward his famous paper on the fundamental legal conceptions, which became the basis for the formal theory of normative positions and a whole tradition within deontic logic. In this talk, I will bring these two together, highlighting correspondences between the different possible interpretations of responsibility and the theory of normative positions.
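
For orientation, the Hohfeldian correlatives underlying the theory of normative positions can be rendered schematically as follows; this is a textbook-style sketch, not the specific correspondences to be presented in the talk.

```latex
% Standard schematic rendering of Hohfeld's first-order positions, as used in
% the theory of normative positions; x, y are agents, \varphi an action or state.
\[
\mathsf{Claim}(x,y,\varphi) \;\leftrightarrow\; \mathsf{Duty}(y,x,\varphi)
\qquad
\mathsf{Privilege}(x,y,\varphi) \;\leftrightarrow\; \neg\,\mathsf{Duty}(x,y,\neg\varphi)
\]
% The second-order positions (power/liability, immunity/disability) pair up analogously.
```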

Gabriella Pigozzi

Argumentation spaces: Two case studies and a first model

When modelling collective deliberation, it is generally assumed that all agents share the same notion of what constitutes a good argument. But, as individuals do not have the same beliefs, knowledge, values and goals, they may disagree on what constitutes good evidence or proof.


Building on two case studies, we'll show that the arguments deployed in many public debates come from and move across different spaces. Each space has its own standards for creating and evaluating arguments. During the Covid-19 pandemic, for instance, we witnessed a rapid construction and circulation of scientific arguments in public spaces. Scientific findings were debated in and judged by worlds other than science, where the confrontation of positions, the approval of arguments and the forms of reasoning follow different standards.


In this talk, we'll present two case studies and a first formal model of a controversy involving different argumentation spaces.
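
A toy illustration of that idea follows; the space names and acceptance standards below are invented for the example and are not the formal model to be presented.

```python
# Toy illustration: the same argument can be evaluated differently depending on
# the space in which it circulates, because each space applies its own standard.
# Space names and standards are invented for the example, not the talk's model.

ARGUMENTS = {
    "A1": {"claim": "masks reduce transmission", "evidence": ["observational study"]},
    "A2": {"claim": "masks are useless", "evidence": []},
}

STANDARDS = {
    # a scientific space: an argument is acceptable only if it cites some evidence
    "science": lambda arg: len(arg["evidence"]) > 0,
    # a (caricatured) public-debate space: any argument that is voiced is admitted
    "public_debate": lambda arg: True,
}

def acceptable(space, name):
    """Evaluate an argument by the standard of the given space."""
    return STANDARDS[space](ARGUMENTS[name])

if __name__ == "__main__":
    for space in STANDARDS:
        for name in ARGUMENTS:
            print(f"{name} acceptable in {space}: {acceptable(space, name)}")
```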


Joint work with Juliette Rouchier and Dov Gabbay.

Henry Prakken

An introduction to the ASPIC+ framework for defeasible argumentation

One concern in logical models of responsible agency is the defeasibility of reasons for action. In this talk I will give an overview of the ASPIC+ framework for defeasible argumentation. ASPIC+ is inspired by the seminal work of John Pollock on defeasible reasoning, and is designed to generate abstract argumentation frameworks in the sense of Dung.
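
To fix ideas about the abstract argumentation frameworks in the sense of Dung that ASPIC+ generates, here is a minimal sketch of such a framework and its grounded extension; the argument names are arbitrary, and the ASPIC+ construction of arguments from strict and defeasible rules is not shown.

```python
# Minimal sketch: a Dung abstract argumentation framework (arguments plus an
# attack relation) and its grounded extension, computed as the least fixed point
# of the characteristic function. ASPIC+ would generate such a framework from
# strict and defeasible rules; that construction step is not shown here.

def grounded_extension(arguments, attacks):
    """attacks: set of (attacker, attacked) pairs."""
    def defended(arg, in_set):
        # arg is acceptable w.r.t. in_set if every attacker of arg is itself
        # attacked by some member of in_set
        return all(any((d, b) in attacks for d in in_set)
                   for (b, a) in attacks if a == arg)
    extension = set()
    while True:
        new = {a for a in arguments if defended(a, extension)}
        if new == extension:
            return extension
        extension = new

if __name__ == "__main__":
    args = {"A", "B", "C"}
    atts = {("A", "B"), ("B", "C")}          # A attacks B, B attacks C
    print(grounded_extension(args, atts))    # {'A', 'C'}
```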


Giuseppe Primiero

Trustworthy AI: probabilities meet possible worlds

Conceptual and formal approaches for modeling trustworthiness as a (desirable) property of AI systems are emerging in the literature. Developing logics fit for this aim requires both analysing the non-deterministic aspects of AI systems and offering a formalization of the intended meaning of their trustworthiness. In this work, we take a semantic perspective on representing such processes, and provide a measure on possible worlds for evaluating them as trustworthy. In particular, we understand trustworthiness as the correspondence (within acceptable error limits) between a model expressing the theoretical probability that a process produces a given output and a model measuring the frequency of that output over a relevant number of tests. This semantics characterizes the probabilistic typed natural deduction calculus TPTND introduced in previous works, setting the stage for an epistemic logic appropriate to the task of reasoning about knowledge of trustworthy non-deterministic processes.
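
That notion of trustworthiness, agreement within an error bound between theoretical probability and observed frequency, can be glossed by a simple check like the one below; the names and numbers are illustrative, and the TPTND calculus itself is of course not captured by it.

```python
# Illustrative gloss: a non-deterministic process is judged trustworthy for an
# output when the observed frequency of that output over n tests stays within an
# acceptable error bound of its theoretical probability. Names and numbers are
# illustrative; this does not capture the TPTND calculus itself.

def trustworthy(theoretical_p, observed_count, n_tests, epsilon):
    observed_freq = observed_count / n_tests
    return abs(theoretical_p - observed_freq) <= epsilon

if __name__ == "__main__":
    # e.g. a process claimed to produce a given output with probability 0.8,
    # observed 152 times over 200 runs, with tolerance 0.05
    print(trustworthy(0.8, 152, 200, 0.05))   # True: |0.8 - 0.76| = 0.04 <= 0.05
```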

Joint work with Ekaterina Kubyshkina

Leon van der Torre

The Jiminy Advisor:  Moral agreements among stakeholders based on norms and argumentation
We present a framework for distributing normative reasoning across several normative systems, each with its own stakeholder and set of norms, together with a mechanism for resolving moral dilemmas based on formal argumentation and a defeat relation between arguments. Dilemmas traverse an 'escalation ladder': they are resolved either by pooling the arguments of the individual systems, which may introduce new defeats; by combining the systems to generate additional, combined arguments, again introducing new defeats; or, finally, by relying on an additional stakeholder, referred to as Jiminy, who provides a (context-dependent) priority relation between stakeholders in order to remove certain defeats. The framework is supported by a running example and a high-level discussion of its integration into selected existing agent architectures. The proposed Jiminy advisor model is discussed from the perspective of explainability and in comparison with related work.
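
A toy rendering of the final, priority-based step of the escalation ladder is sketched below; the stakeholder names, arguments and acceptance check are invented for illustration and are not the paper's formal definitions, and the first two levels of the ladder are not shown.

```python
# Toy rendering of the last step of the escalation ladder: arguments stem from
# stakeholders' normative systems, mutual defeats encode a moral dilemma, and a
# Jiminy priority over stakeholders removes some defeats to resolve it. Names
# and the simple acceptance check are invented for illustration.

ARGUMENTS = {             # argument -> stakeholder whose norms generate it
    "deliver_medicine_now": "patient",
    "respect_quiet_hours":  "care_home",
}
DEFEATS = {               # mutual defeat: a dilemma between two stakeholders
    ("deliver_medicine_now", "respect_quiet_hours"),
    ("respect_quiet_hours", "deliver_medicine_now"),
}
JIMINY_PRIORITY = ["patient", "care_home"]   # earlier entry = higher priority

def remove_overridden_defeats(defeats):
    """Drop a defeat when the attacker's stakeholder has strictly lower Jiminy
    priority than the attacked argument's stakeholder."""
    def rank(arg):
        return JIMINY_PRIORITY.index(ARGUMENTS[arg])
    return {(a, b) for (a, b) in defeats if rank(a) <= rank(b)}

def undefeated(defeats):
    return {a for a in ARGUMENTS if all(att != a for (_, att) in defeats)}

if __name__ == "__main__":
    print(undefeated(DEFEATS))                              # set(): unresolved dilemma
    print(undefeated(remove_overridden_defeats(DEFEATS)))   # {'deliver_medicine_now'}
```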


Joint work with Beishui Liao, Pere Pardo and Marija Slavkovik

To appear in: Journal of Artificial Intelligence Research (JAIR)