In recent years, counterfactual explanations have emerged as a key technique in eXplainable AI (XAI). Counterfactuals allow AI systems to provide users with understandable and actionable insights by proposing minimal changes to inputs that would yield a different outcome. This half-day tutorial at AAAI-25 delves into the rapidly growing area of counterfactual explanations.
We will introduce participants to the fundamentals and practical applications of counterfactual explanations, exploring how they can improve transparency and meet regulatory standards, all while fostering user trust. Aligned with AAAI-25’s "Collaborative Bridge Theme," this session spans disciplines, combining perspectives from machine learning, psychology, and human-centered design.
Aimed at machine learning researchers and AI practitioners who may be new to XAI or lack user study experience, this tutorial offers both foundational and hands-on learning. Participants will first explore the philosophical and psychological underpinnings of counterfactual reasoning, understanding why these explanations resonate so strongly with human thought processes. They will then learn practical methods for generating counterfactual explanations, using tools such as the DiCE toolbox to create diverse, understandable, and feasible counterfactuals. Through interactive sessions, we will also cover best practices in user study design, including strategies for structuring, running, and analyzing studies that evaluate the impact of XAI on end-users.
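To give a flavor of the hands-on component, the sketch below generates counterfactuals with the DiCE toolbox (the dice-ml package) for a small, entirely hypothetical loan-approval dataset; the feature names, the data, and the random-forest classifier are illustrative assumptions, not tutorial material.

```python
# Minimal sketch of counterfactual generation with DiCE (pip install dice-ml).
# The dataset, feature names, and classifier are hypothetical examples.
import dice_ml
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical tabular loan data: two continuous features and a binary outcome.
df = pd.DataFrame({
    "income":   [30.0, 45.0, 60.0, 80.0, 25.0, 90.0, 55.0, 40.0],
    "debt":     [20.0, 10.0, 30.0, 15.0, 25.0,  5.0, 20.0, 30.0],
    "approved": [0, 1, 1, 1, 0, 1, 1, 0],
})

# Train any sklearn-compatible classifier on the data.
clf = RandomForestClassifier(random_state=0).fit(df[["income", "debt"]], df["approved"])

# Wrap the data and model in DiCE's interfaces.
data = dice_ml.Data(dataframe=df, continuous_features=["income", "debt"],
                    outcome_name="approved")
model = dice_ml.Model(model=clf, backend="sklearn")
explainer = dice_ml.Dice(data, model, method="random")

# Request 3 diverse counterfactuals that flip the prediction for one applicant.
query = df[["income", "debt"]].iloc[[4]]  # an applicant predicted as rejected
cfs = explainer.generate_counterfactuals(query, total_CFs=3,
                                         desired_class="opposite")
cfs.visualize_as_dataframe(show_only_changes=True)
```

DiCE also offers "genetic" and "kdtree" generation methods in place of "random"; the choice among them is one of the levers for tuning how diverse and feasible the resulting counterfactuals are.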
By the end of the tutorial, attendees will gain a comprehensive understanding of counterfactual XAI, learn practical implementation techniques, and develop skills in user study design and data analysis. Participants will leave equipped with the knowledge and tools to enhance transparency and trust in AI systems through counterfactual explanations and robust human-centered evaluation. This tutorial is ideal for professionals looking to expand their technical expertise and research abilities in the evolving field of XAI.
Bielefeld University, Germany
A cognitive scientist by training, Ulrike promotes a human-centric approach to AI in her post-doctoral research. Her work provides empirical evidence to inform the design, development, and deployment of eXplainable AI systems that effectively meet users' needs and preferences and evoke their trust.
Bielefeld University, Germany
University of Cyprus, Cyprus
André is a post-doctoral researcher interested in Trustworthy AI and AI for Critical Infrastructure. He focuses mainly on eXplainable AI, in particular on contrasting explanations. Besides fundamental research, he also works on applications of XAI, such as water distribution networks, transportation, and decision support systems for business owners.
University College Dublin, Ireland
Mark is Chair of Computer Science at University College Dublin. His work is split between cognitive science and computer science. His cognitive science research has covered analogy, metaphor, conceptual combination, and similarity; his computer science research spans natural language processing, machine learning, case-based reasoning, text analytics and, more recently, explainable artificial intelligence.