# SCHOOL ON Logic Programming

## 31 JULY - 1 AUGUST 2022

## Haifa, Israel

Students and researchers interested in logic programming are invited to attend the 2022 Logic Programming School (31st July - 1st August, Haifa, Israel). It will take place during the 38th International Conference on Logic Programming (ICLP) (July 31st - August 8th, 2022, Haifa, Israel).

The 2-day school is suited for those who wish to learn advanced topics in logic programming and constraint programming. It will consist of four tutorials.

## CONFIRMED SPEAKERS

### Effective Modeling in Answer Set Programming modulo Theories

**Speaker:** Martin Gebser.

**When:** 31st July, 09:00 - 10:30.

**Where:** room Taub 8.

Answer Set Programming is a declarative problem solving paradigm featuring high-level modeling languages and efficient solving systems. While first-order rules allow for concise, human-readable encodings of problem solutions, whose automated computation incorporates state-of-the-art declarative database and Boolean constraint reasoning methods, the effective modeling of quantitative, spatial and temporal conditions is particularly challenging. This tutorial discusses common pitfalls in knowledge representation and modeling techniques to overcome them, taking advantage of the basic Answer Set Programming formalism as well as extensions by theories like difference logic, finite-domain constraints or mixed-integer programming. Such theories enhance the high-level modeling and powerful reasoning capacities of Answer Set Programming to deal with specific application challenges in planning and scheduling, design and configuration, intelligent robotics, logistics management, systems biology and more.
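As a toy illustration of the stable-model semantics underlying Answer Set Programming (a simplified sketch, not part of the tutorial material), the following Python snippet enumerates the answer sets of a small ground normal program by testing every candidate set of atoms against the Gelfond-Lifschitz reduct; the program and atom names are invented for the example.

```python
from itertools import combinations

# A ground normal rule "h :- p1, ..., not n1, ..." is represented as
# (head, positive_body, negative_body), each body a tuple of atoms.

def reduct(program, candidate):
    """Gelfond-Lifschitz reduct: drop rules whose negative body
    intersects the candidate; strip 'not' literals from the rest."""
    return [(h, pos) for (h, pos, neg) in program
            if not (set(neg) & candidate)]

def minimal_model(positive_program):
    """Least model of a negation-free program, by fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for h, pos in positive_program:
            if set(pos) <= model and h not in model:
                model.add(h)
                changed = True
    return model

def stable_models(program, atoms):
    """Enumerate answer sets: a candidate is stable iff it equals
    the least model of its own reduct (brute force, for exposition)."""
    for r in range(len(atoms) + 1):
        for cand in combinations(sorted(atoms), r):
            cand = set(cand)
            if minimal_model(reduct(program, cand)) == cand:
                yield cand

# Example program:  p :- not q.   q :- not p.
program = [("p", (), ("q",)), ("q", (), ("p",))]
print(sorted(sorted(m) for m in stable_models(program, {"p", "q"})))
# → [['p'], ['q']]  (two answer sets: {p} and {q})
```

Real ASP systems such as clingo ground and solve far more cleverly, but the reduct-based check above is exactly the semantic definition the tutorial builds on.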

### How to marry neural networks and answer set programming

**Speaker:** Joohyung Lee.

**When:** 1st August, 11:00 - 12:30.

**Where:** room Taub 8.

Neuro-symbolic AI aims to combine deep neural network learning and symbolic AI reasoning, which look intrinsically different from each other on the surface. Neural network learning optimizes continuous values, whereas logical reasoning tends to be discrete. Despite the vast success of deep neural networks, it is unclear how knowledge can be conveniently represented and how complex high-level reasoning can be computed in deep neural networks. I will introduce two methods in this direction. NeurASP integrates neural networks with Answer Set Programming (ASP). By treating the neural network output as the probability distribution over atomic facts in answer set programs, NeurASP provides a simple and effective way to integrate sub-symbolic and symbolic computation. I will explain how NeurASP can make use of a pre-trained neural network in symbolic computation and how it can improve the neural network’s perception result by applying symbolic reasoning in answer set programming. Also, NeurASP can make use of ASP rules to train a neural network better so that a neural network not only learns from implicit correlations from the data but also from the explicit complex semantic constraints expressed by the rules. Next, to have a more efficient computation, I will describe a systematic way to represent logical constraints as a loss function borrowing a method used to train binary neural networks to make a discretizing function meaningfully differentiable.
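The idea of treating neural network outputs as a probability distribution over atomic facts can be sketched in a few lines; this is a hypothetical, simplified illustration (not the NeurASP API), with invented softmax values for two classifier outputs and a symbolic constraint relating them.

```python
from itertools import product

# Hypothetical softmax outputs of a digit classifier over {0, 1, 2}
# for two image inputs; in NeurASP these would be "neural atoms".
p_img1 = {0: 0.7, 1: 0.2, 2: 0.1}
p_img2 = {0: 0.1, 1: 0.6, 2: 0.3}

def query_probability(constraint):
    """Probability mass of the joint worlds whose symbolic part
    (here: a constraint on the two digits) is satisfied, assuming
    the two neural outputs are independent."""
    return sum(p_img1[a] * p_img2[b]
               for a, b in product(p_img1, p_img2)
               if constraint(a, b))

# Symbolic knowledge: the two digits sum to 2.
print(query_probability(lambda a, b: a + b == 2))
# ≈ 0.34  (0.7·0.3 + 0.2·0.6 + 0.1·0.1)
```

Inference in the real system enumerates stable models rather than raw digit pairs, but the principle is the same: symbolic rules carve out the admissible worlds, and the neural distributions weight them.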

### Logic and Problem Solving with Prolog

**Speaker:** Paul Tarau.

**When:** 31st July, 11:00 - 12:30.

**Where:** room Taub 8.

We start with a short history of Prolog and its origins in using logic as a problem solving tool. After introducing the unification algorithm and its use for Horn clause resolution, we describe Prolog's backtracking-driven answer search mechanism via a small meta-interpreter and a sketch of its Python equivalent.
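A minimal version of such a Python sketch might look like the following: a Robinson-style unification routine plus a backtracking meta-interpreter over Horn clauses. The clause representation and example program are invented for illustration, and the sketch deliberately omits clause-variable renaming and the occurs check, which a real interpreter would need.

```python
def walk(t, s):
    """Resolve a variable through substitution s.
    Variables are capitalised strings; terms are tuples."""
    while isinstance(t, str) and t[0].isupper() and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    """Robinson-style unification (no occurs check).
    Returns an extended substitution, or None on failure."""
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if isinstance(a, str) and a[0].isupper():
        return {**s, a: b}
    if isinstance(b, str) and b[0].isupper():
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

def solve(goals, program, s):
    """Backtracking meta-interpreter: yield every substitution
    that proves all goals against (head, body) clauses."""
    if not goals:
        yield s
        return
    first, rest = goals[0], goals[1:]
    for head, body in program:
        s1 = unify(first, head, s)
        if s1 is not None:
            yield from solve(list(body) + rest, program, s1)

# parent(tom, bob).  parent(bob, ann).
# grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
program = [
    (("parent", "tom", "bob"), ()),
    (("parent", "bob", "ann"), ()),
    (("grandparent", "X", "Z"), (("parent", "X", "Y"), ("parent", "Y", "Z"))),
]
s = next(solve([("grandparent", "tom", "Who")], program, {}))
print(walk("Who", s))  # ann
```

Backtracking falls out for free from Python's generators: each `yield from` explores one clause choice, and exhausting a generator returns control to the previous choice point.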

After a short discussion of Prolog's control structures and dynamic database operations, we put Prolog to work by efficiently solving a few interesting problems, among them a Sudoku problem generator and solver, a compact type-inference algorithm for simply typed lambda terms, and a theorem prover for intuitionistic propositional logic.

After introducing Prolog's definite clause grammars and their applications to natural language processing, we overview state-of-the-art Prolog systems and their extensions, among them tabling, constraint solving, multi-threading, and co-routining mechanisms.

We conclude with a reflection on future evolutions of Prolog and its descendant logic programming languages, with a focus on neuro-symbolic mechanisms for building knowledge-driven deep-learning systems.

### Argumentation frameworks for explainable AI

**Speaker:** Francesca Toni.

**When:** 1st August, 09:00 - 10:30.

**Where:** room Taub 8.

Argumentation, initially studied in philosophy and law, has gained popularity over the years as a knowledge representation and reasoning formalism in AI to support, in particular, non-monotonic and paraconsistent reasoning, as well as various forms of decision-making. Simply stated, argumentation in AI focuses on various abstractions of "debates", generically termed argumentation frameworks, capturing interactions where parties plead for and against some conclusions. Argumentation frameworks are equipped with semantics, algorithms and systems for drawing dialectically valid or strong conclusions, while at the same time lending themselves naturally to explaining (dialectically) why the conclusions are drawn.

In its most abstract form, an argumentation framework consists of just a set of arguments and a binary relation representing attacks between arguments. By instantiating the notion of arguments and the attack relation, different logic-based argumentation frameworks can be obtained, including assumption-based argumentation frameworks. Additional dialectical relations (notably support) can also be accommodated to obtain bipolar argumentation frameworks. Argumentation frameworks can serve as abstractions of several AI systems, thus providing a natural step towards defining explanations for their outputs.
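As a small illustration of the abstract setting just described (a sketch with invented argument names, not tutorial material), the following Python snippet computes the grounded extension of an argumentation framework, i.e. the least fixpoint of the "defended arguments" operator:

```python
def grounded_extension(arguments, attacks):
    """Grounded extension of an abstract argumentation framework:
    iterate the characteristic function from the empty set, each
    round accepting every argument all of whose attackers are
    attacked by some already-accepted argument."""
    accepted = set()
    while True:
        defended = {a for a in arguments
                    if all(any((d, b) in attacks for d in accepted)
                           for b in arguments if (b, a) in attacks)}
        if defended == accepted:
            return accepted
        accepted = defended

# a attacks b, b attacks c: a is unattacked, so a is accepted;
# a defeats c's only attacker b, so c is defended as well.
args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}
print(sorted(grounded_extension(args, attacks)))  # ['a', 'c']
```

Other acceptability semantics mentioned in the tutorial (admissible, preferred, stable) refine or relax this fixpoint construction in different ways.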

This tutorial will consist of three parts.

In the first part, I will survey some existing approaches to argumentation in AI, focusing on abstract, assumption-based, bipolar and quantified bipolar argumentation frameworks. In particular, I will overview the semantics and computational machinery underpinning these argumentation frameworks, as well as their relationships with a variety of other frameworks to support reasoning (in particular non-monotonic, classical and paraconsistent reasoning). Semantics will range from qualitative, acceptability semantics (in terms of extensions as well as labellings) to quantitative, gradual semantics to ascertain the dialectical strength of arguments. The tutorial will also touch upon the integration of argumentative and probabilistic reasoning within probabilistic argumentation frameworks.

In the second part, I will briefly introduce the research field of explainable AI (XAI), which has witnessed unprecedented growth in recent years, prompted by the need to guarantee that automated decision tools are transparent. I will motivate argumentation frameworks as strong contenders for supporting XAI, given that their dialectical nature appears to match some basic desirable features of the explanation activity.

In the third part, I will overview a variety of XAI solutions built using argumentation frameworks. I will focus on explanations for a number of applications, including machine-learning-based classifiers, case-based reasoning, decision-making, scheduling, and recommender systems. Specifically, I will show how these applications can be understood as argumentation frameworks (of one type or another) and how different forms of explanations can be drawn from these frameworks.

## Registration

Registration is part of the Pre-FLoC workshops registration.

## Sponsors

## Organizer

Carmine Dodaro, University of Calabria