Natasha Alechina (Open University Netherlands / Utrecht University, The Netherlands)
Norms and Reinforcement Learning
In this talk, I will discuss reinforcement learning with norms, where the aim is to learn a policy such that all agent executions comply with the norm. I will focus on norms represented as Mealy machines, which, for every state (context) of the norm, output the set of actions that do not lead to a norm violation. Such norms can be used similarly to shields in safe reinforcement learning, to train agents to adhere to the norm during both training and deployment. Previous work on synthesising Mealy machines to prohibit agent actions for reinforcement learning has focused on temporal logic specifications expressed in safety LTL or Pure Past Temporal Logic. I will present an approach in which specifications are expressed in Alternating-Time Temporal Logic with Strategy Contexts, which generates minimum-cost norms for multi-agent systems.
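The shielding idea described above can be illustrated with a minimal sketch (the names and the toy norm are illustrative assumptions, not taken from the talk): a Mealy machine whose output in each norm state is the set of actions that keep every execution norm-compliant, consulted by the learning agent before each action.

```python
# Illustrative sketch of a norm as a Mealy machine used as a shield.
# Names and the toy norm are invented for illustration.

class NormMealyMachine:
    def __init__(self, transitions, allowed, initial):
        self.transitions = transitions  # (state, action) -> next norm state
        self.allowed = allowed          # state -> set of non-violating actions
        self.state = initial

    def permitted(self):
        """The machine's output: actions that do not lead to a violation."""
        return self.allowed[self.state]

    def step(self, action):
        # The shield blocks any action outside the permitted set.
        assert action in self.permitted(), "action would violate the norm"
        self.state = self.transitions[(self.state, action)]

# Toy norm: after borrowing, the only compliant action is returning.
norm = NormMealyMachine(
    transitions={("free", "borrow"): "owing",
                 ("free", "wait"): "free",
                 ("owing", "return"): "free"},
    allowed={"free": {"borrow", "wait"}, "owing": {"return"}},
    initial="free",
)
```

During both training and deployment, the agent would restrict its action choice at every step to `norm.permitted()`, so compliance is guaranteed by construction rather than merely rewarded.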
Henry Prakken (Utrecht University, The Netherlands)
An argumentation approach to modelling actual causation in the law
In this talk I will present an argumentation-based formal model of actual causation in the law (also called cause-in-fact). I will then put this model in the wider perspective of a formal account of relevance of arguments in abstract argumentation frameworks.
The starting point of our approach is Richard Wright's recommendation to separate purely epistemic aspects of cause-in-fact from legal concerns related to liability and responsibility. We only address the first issue. Specifically, we adopt the so-called NESS account of cause-in-fact, introduced by Hart and Honoré in 1949 and further developed by Richard Wright and others. According to NESS, a fact C is a cause of an effect E if and only if C is a necessary element of a set of conditions that are jointly sufficient for E to occur.
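The NESS definition just stated can be made concrete with a small sketch (the encoding is my own assumption, not the paper's formalisation): a condition C is a NESS cause of E if some set of actual conditions containing C is sufficient for E while that set without C is not.

```python
from itertools import chain, combinations

def ness_cause(c, actual, sufficient):
    """Sketch of the NESS test: c is a cause iff c belongs to some set S
    of actual conditions such that S is sufficient for E but S - {c} is
    not (c is a Necessary Element of a Sufficient Set)."""
    others = [x for x in actual if x != c]
    subsets = chain.from_iterable(
        combinations(others, r) for r in range(len(others) + 1))
    for rest in subsets:
        if sufficient(set(rest) | {c}) and not sufficient(set(rest)):
            return True
    return False

# Classic overdetermination example: two independent fires A and B reach
# a house, either one alone sufficient to burn it down.
sufficient = lambda s: "fireA" in s or "fireB" in s
```

On this example each fire comes out as a NESS cause, even though neither is a but-for cause (the house burns even if either fire is removed), which is precisely the kind of case NESS is designed to handle.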
Our choice for an argumentation approach is motivated by structural similarities between NESS and notions of relevance developed in the formal study of argumentation. Accordingly, we first formalise our model of cause-in-fact in ASPIC+ and then embed our model in a more abstract account of how arguments can be relevant for the acceptability status of other arguments.
Christian Straßer (Ruhr University Bochum, Germany)
Deontic Argumentation Calculi: A Modular and Explanatory Framework for Normative Reasoning
In this talk I will present some recent developments in Deontic Argumentation Calculi (DACs), based on joint work with Kees van Berkel and Zheng Zhou. DACs utilize a sequent-based proof-theoretic methodology that is integrated with formal argumentation to accommodate reasoning in the presence of deontic conflicts. The talk has three focal points. First, I will show that, due to its highly modular nature, the DAC framework can accommodate well-known nonmonotonic reasoning styles from the literature, such as input-output logics and default logic. Second, reasoning with disjunctions is challenging to model in defeasible deontic logics. I will present how DACs can obtain intuitive outcomes in complex scenarios by carefully modeling the defeasibility of individual disjunctive paths when reasoning with cases. Third, I will highlight various ways in which DACs can be used to generate explanations, i.e., answers to questions such as "Why should I do A (instead of B)?".
Pedro Cabalar (Corunna University, Spain)
Normative Reasoning with Deontic Answer Set Programming
A rigorous verification of the compliance of Artificial Intelligence (AI) systems with social, ethical and legal regulations requires, in the first place, the design of formal languages for an accurate specification of norms. Normative reasoning falls under the domain of deontic logic, which studies the formalisation of concepts such as obligations, permissions and violations. In this talk, we will review a recent approach to normative reasoning based on a deontic extension of Answer Set Programming (ASP), a successful paradigm for practical Knowledge Representation and problem solving. To this end, we will start by reviewing the extension of Equilibrium Logic (the logical basis of ASP) to cope with deontic operators and their combination with explicit negation. Then, we will illustrate the use of this approach by formalising several challenging examples from the deontic literature. Finally, we will turn to temporal normative reasoning and the kinds of concepts that arise when deontic operators are combined with time, explaining their recent formalisation based on Temporal Deontic Equilibrium Logic.
Ilaria Canavotto (University of Maryland, US)
Formal Models of Case-Based Normative Reasoning: Insights from AI and Law
Reasoning with precedents—drawing analogies to past situations—is a common and intuitive way to gain normative guidance in new situations. But how, exactly, does this reasoning work? And how can normative guidance be systematically derived from precedent decisions? These questions call for a logical analysis of precedent-based reasoning. Yet, because of its reliance on analogies and its adversarial nature, this form of reasoning is often seen as resistant to such analysis. A key contribution of the field of artificial intelligence (AI) and law has been to challenge this view: not only can important aspects of precedent-based reasoning be formalized and automated, but the constraints governing it can also be rigorously analyzed through logical methods.
In this tutorial, I will present an overview of formal models of case-based reasoning and precedential constraint developed in AI and law, and discuss how these models help clarify the structure and logic of reasoning with precedents. Along the way, I will highlight important conceptual questions raised by these models, and briefly discuss their potential relevance to the development of explainable AI methods.
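One widely studied model in this literature, the a fortiori "result model" of precedential constraint, can be sketched in a few lines (the encoding below is an illustrative assumption, not the tutorial's own notation): cases are sets of pro-plaintiff and pro-defendant factors, and a precedent forces the same outcome in any new case that is at least as strong for the winning side.

```python
# Illustrative sketch of a fortiori precedential constraint.
from dataclasses import dataclass

@dataclass(frozen=True)
class Case:
    pi: frozenset     # factors favouring the plaintiff
    delta: frozenset  # factors favouring the defendant
    outcome: str      # "pi" (plaintiff won) or "delta" (defendant won)

def forces(precedent, new_pi, new_delta):
    """A precedent forces its outcome in a new case that contains all the
    factors favouring the winner and at most the factors favouring the
    loser of the precedent case."""
    if precedent.outcome == "pi":
        return precedent.pi <= new_pi and new_delta <= precedent.delta
    return precedent.delta <= new_delta and new_pi <= precedent.pi

old = Case(pi=frozenset({"p1", "p2"}), delta=frozenset({"d1"}), outcome="pi")
```

A new case with an extra pro-plaintiff factor and no new pro-defendant factors is then constrained to be decided for the plaintiff, while a new case that drops a pro-plaintiff factor is left unconstrained, which is where genuinely adversarial argument begins.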
Elisa Freschi (University of Toronto, Canada) & Josephine Dik (TU Wien, Austria)
New sources for deontics: The Mīmāṃsā school of Sanskrit philosophy
Mīmāṃsā is one of the most influential schools of Sanskrit philosophy. For millennia it focused on the Vedas, the sacred texts of what is now called “Hinduism”, and primarily on the commands found in them and on their structure. It is thus a treasure trove of ideas and challenges for contemporary deontic logicians. The deontic operators of Mīmāṃsā behave significantly differently from what is often assumed to hold in contemporary deontic logic. For example, prescriptions and prohibitions are not mutually definable (e.g., “X is obligatory” is not tantamount to “not-X is forbidden”), and permissions are not defined as the counterpart of prohibitions (“Z is permitted” is not tantamount to “Z is not forbidden”). Instead, Mīmāṃsā authors consider permissions always to be exceptions to previous prohibitions or negative obligations.
In the last eight years, the deontic theory of Mīmāṃsā has been formalised through a series of works, but much remains to be done to expand on all the potential inputs that Sanskrit philosophy can offer to modern deontic logic.
This tutorial is articulated in two parts. The first part will discuss the historical and philosophical aspects of the reflections on deontic concepts as articulated by Mīmāṃsā authors. We will discuss deontic paradoxes found in Mīmāṃsā sources as well as deontic paradoxes studied in contemporary deontic logic and their possible Mīmāṃsā-inspired solutions.
The second part will show how to transform such reflections into formulas, examine their significance for contemporary deontic logic, and discuss how the resulting formalisations behave in connection with some of the most common deontic paradoxes.
Matthias Scheutz (Tufts Institute for AI, US)
How to Design Norm-Following Agents
In this tutorial, we will review and discuss the two main approaches to designing norm-following agents: (1) implicit norm-followers, i.e., agents that learn to exhibit norm-following behavior without representing norms, and (2) explicit norm-followers, i.e., agents that explicitly represent norms in some logical formalism and use them for action selection. We will then focus on "hybrid" agents that combine formal norm specifications with methods for learning how to obey those specifications, and end with a set of open questions for future work.
For questions concerning DEON: deon2025@logic.at.