2nd Workshop on

Interactive Natural Language Technology for Explainable Artificial Intelligence

Program

The virtual program for 18 December is out!

  • Proceedings are published in the ACL Anthology

  • Times are given in GMT, synchronized with the rest of the INLG workshops

[11:00 - 11:05] Welcome
[11:05 - 12:00] Keynote Talk

Prof. Emiel Krahmer (Tilburg University):

"Automatically explaining health information" [SLIDES]

Modern AI systems automatically learn from data using sophisticated statistical models. Explaining how these systems work and how they make their predictions therefore increasingly involves producing descriptions of how different probabilities are weighted and which uncertainties underlie these numbers. But what is the best way to (automatically) present such probabilistic explanations, do people actually understand them, and what is the potential impact of such information on people’s wellbeing? In this talk, I address these questions in the context of systems that automatically generate personalised health information. The emergence of large national health registries, such as the Dutch cancer registry, now makes it possible to automatically generate descriptions of treatment options for new cancer patients based on data from comparable patients, including health and quality-of-life predictions following different treatments. I describe a series of studies in which our team has investigated to what extent this information is currently provided to people, and under which conditions people actually want access to these kinds of data-driven explanations. Additionally, we have studied whether there are different profiles of information needs, and what the best way is to present probabilistic information and the associated uncertainties to people.

Emiel Krahmer is a full professor at the Tilburg center for Cognition and Communication (TiCC). In his research, he studies how people communicate with each other, and how computers can be taught to do the same, in order to improve communication between humans and machines. The communication technologies he and his team develop have applications in, for example, media (automatic generation of news reports), health (data-driven treatment decision aids, chatbots promoting healthy behaviour) and education (social robots teaching children a second language).

[12:00 - 12:30] Oral Presentations (Session I)

Content Selection for Explanation Requests in Customer-Care Domain

Luca Anselma, Mirko Di Lascio, Dario Mana, Alessandro Mazzei and Manuela Sanguinetti

ExTRA: Explainable Therapy-Related Annotations

Mat Rawsthorne, Tahseen Jilani, Jacob Andrews, Yunfei Long, Jeremie Clos, Samuel Malins and Daniel Hunt

[12:30 - 12:50] Virtual Coffee Break
[12:50 - 14:20] Oral Presentations (Session II)

The Natural Language Pipeline, Neural Text Generation and Explainability

Juliette Faille, Albert Gatt and Claire Gardent

Towards Harnessing Natural Language Generation to Explain Black-box Models

Ettore Mariotti, José M. Alonso and Albert Gatt

Explaining Bayesian Networks in Natural Language: State of the Art and Challenges

Conor Hennessy, Alberto Bugarín Diz and Ehud Reiter

Explaining data using causal Bayesian networks

Jaime Sevilla

Towards Generating Effective Explanations of Logical Formulas: Challenges and Strategies

Alexandra Mayn and Kees van Deemter

Argumentation Theoretical Frameworks for Explainable Artificial Intelligence

Martijn Demollin, Qurat-Ul-Ain Shaheen, Katarzyna Budzynska and Carles Sierra

[14:20 - 15:00] Lunch Break

[15:00 - 16:00] Keynote Talk

Prof. Eirini Ntoutsi (Leibniz Universität Hannover & L3S Research Center):

"Bias in AI-systems: A multi-step approach" [SLIDES]

Algorithmic decision making powered by AI and (big) data has already penetrated almost all spheres of human life, from content recommendation and healthcare to predictive policing and autonomous driving, deeply affecting everyone, anywhere, anytime. While the technology allows previously unthinkable optimizations in the automation of expensive human decision making, the risks it poses are also high, leading to ever-increasing public concern about its impact on our lives. The area of responsible AI has recently emerged in an attempt to put humans at the center of AI-based systems by considering aspects such as the fairness, reliability and privacy of decision-making systems.

In this talk, we will focus on the fairness aspect. We will start by examining the many sources of bias and how biases can enter at each step of the learning process and even be propagated or amplified from previous steps. We will continue with methods for mitigating bias, which typically focus on a single step of the pipeline (data, algorithms or results), and discuss why it is important to target bias both at each step and collectively, across the whole (machine) learning pipeline. We will conclude the talk by discussing accountability issues in connection with bias, in particular proactive consideration via bias-aware data collection, processing and algorithmic selection, and retroactive consideration via explanations.

Eirini Ntoutsi is currently an associate professor at the Leibniz University of Hanover (LUH), Germany, and a member of the L3S Research Center. Prior to joining LUH, she was a post-doctoral researcher at the Ludwig-Maximilians-University (LMU) in Munich, Germany, in the group of Prof. H.-P. Kriegel, which she joined as a post-doctoral fellow of the Alexander von Humboldt Foundation. She holds a PhD in Data Mining from the University of Piraeus, Greece, and a master's degree and diploma in Computer Engineering and Informatics from the University of Patras, Greece. Her research interests lie in the fields of Artificial Intelligence (AI) and Machine Learning (ML) and can be summarized as developing methods for i) learning over complex data and data streams, covering aspects such as adaptive learning, change detection and stability, and ii) responsible AI, covering aspects such as fairness-aware learning, data quality and proper evaluation of AI/ML methods. Her research is supported by prestigious funding bodies, including the European Commission, the German Research Foundation and the Volkswagen Foundation. She is the network coordinator of the Marie Skłodowska-Curie Innovative Training Network NoBIAS – Artificial Intelligence without Bias. She has served on several boards and organizing committees, including the ACM International Conference on Information and Knowledge Management (CIKM 2020) as demo and posters co-chair, the IEEE International Conference on Data Mining (ICDM 2017) as publicity co-chair, and the German Machine Learning and Data Mining community meeting (KDML 2019) as program co-chair. She serves on the technical committees of many international conferences and is a frequent reviewer for a number of international journals.

[16:00 - 16:30] Oral Presentations (Session III)

Toward Natural Language Mitigation Strategies for Cognitive Biases in Recommender Systems

Alisa Rieger, Mariët Theune and Nava Tintarev

When to explain: Identifying explanation triggers in human-agent interaction

Lea Krause and Piek Vossen

[16:30 - 16:50] Virtual Coffee Break
[16:50 - 17:20] Oral Presentations (Session IV)

Learning from Explanations and Demonstrations: A Pilot Study

Silvia Tulli, Sebastian Wallkötter, Ana Paiva, Francisco S. Melo and Mohamed Chetouani

Generating Explanations of Action Failures in a Cognitive Robotic Architecture

Ravenna Thielstrom, Antonio Roque, Meia Chita-Tegmark and Matthias Scheutz

[17:20 - 18:10] Round Table: "XAI Open Challenges"

Panelists:

  • Emiel Krahmer (Tilburg University)

  • Eirini Ntoutsi (Leibniz Universität Hannover & L3S Research Center)

  • Albert Gatt (University of Malta)

[18:10 - 18:20] Concluding Remarks and Farewell