All You Ever Need to Know About Counterfactual Explanations

Fundamentals, Methods, & User Studies for XAI

Full-day tutorial at IJCAI 2024


📅 August 3rd, 09:00 AM to 5:30 PM

📍 Room 1F-Yeongju A

How to cite

If you are using / referring to the materials in your own work, please cite

Artelt, A., Keane, M.T., Kuhl, U. (2024). All You Ever Need to Know About Counterfactual Explanations (Tutorial). 33rd International Joint Conference on Artificial Intelligence (IJCAI-24), Jeju, South Korea (August).

Abstract

Counterfactual explanations have been explored extensively in recent research, with over a thousand papers proposing at least 150 distinct algorithms. They offer a psychologically appealing and regulation-compliant solution to the eXplainable AI (XAI) problem.

This full-day tutorial will provide a comprehensive, practical guide to this topic, with hands-on sessions on theoretical foundations, modeling approaches, and both computational and psychological evaluation methodologies.

Tutorial Description

eXplainable AI (XAI) has focused on the development of explanation methods that address the increasing public concern around the transparency, fairness, and auditability of artificial intelligence (AI) systems, aiming to make them more understandable and compliant with governmental regulations (such as the EU AI Act). The use of counterfactual explanations has emerged as a very popular explanation strategy, as they appear to be readily understood by people while meeting regulatory requirements (such as the EU GDPR).

For example, if a smart agriculture (SmartAg) system monitoring a farm predicts high nitrate pollution and the farmer asks "Why?", they could be told: "If you could decrease your fertilizer levels by 20%, then you would significantly lower your nitrate pollution". Such counterfactual explanations have several potentially attractive properties: (i) they meet the SmartAg company's legal requirement to provide an explanation for an automated decision, in a way that (ii) the end user can understand, that (iii) provides practical, actionable advice, and that (iv) improves the overall outcome of the decision-making process (i.e., the farm is more sustainable). However, the interdisciplinary nature of counterfactual research often presents challenges for computer scientists, as there is a need to consider psychological, legal, and social contexts alongside computational requirements. Hence, in this tutorial, we aim to educate participants, in a practical hands-on way, about the computational and psychological underpinnings of this research area.
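To make the underlying idea concrete, here is a minimal sketch of how a counterfactual explanation can be generated: search for the smallest change to an input feature that flips the model's prediction. The model, threshold, and numbers below are hypothetical stand-ins for the SmartAg example, not part of the tutorial materials.

```python
# A counterfactual explanation answers: "what is the smallest change to the
# input that would change the model's decision?" Below, a toy stand-in model
# (hypothetical threshold, not from the tutorial) predicts high nitrate
# pollution above a fixed fertilizer level, and a greedy search finds the
# largest fertilizer amount that avoids that prediction.

def predicts_high_pollution(fertilizer_kg: float) -> bool:
    # Toy stand-in for the SmartAg model's decision function.
    return fertilizer_kg > 80.0

def counterfactual(fertilizer_kg: float, step: float = 1.0,
                   max_steps: int = 1000):
    """Greedily decrease fertilizer until the prediction flips."""
    x = fertilizer_kg
    for _ in range(max_steps):
        if not predicts_high_pollution(x):
            return x  # first value found that avoids the unwanted prediction
        x -= step
    return None  # no counterfactual found within the search budget

current = 100.0
cf = counterfactual(current)
if cf is not None:
    print(f"If you decreased fertilizer from {current:.0f} kg to {cf:.0f} kg, "
          "the model would no longer predict high nitrate pollution.")
```

Real counterfactual methods replace the greedy scan with an optimization over all features, balancing closeness to the original input against constraints such as plausibility and actionability.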


A Counterfactual Explanation in SmartAg: A farmer does not understand the AI System’s pollution predictions in the coming month and asks for an explanation. The AI uses a counterfactual explanation to justify the prediction and convince the user to use less fertilizer than planned. The result is an actionable insight that saves the farmer money, improves environmental sustainability, and bolsters trust in the system.

Schedule

Presenters

Bielefeld University, Germany

University of Cyprus, Cyprus

Bielefeld University, Germany

University College Dublin, Ireland