We are still finalizing the programme for this meet-up; the schedule below may change slightly before it is confirmed. Stay tuned!
12:00–12:45
The meet-up will start with lunch, providing an opportunity to network with fellow participants and connect informally before the sessions begin.
12:45–13:00
Welcome and Introductions
A brief overview of the meet-up goals and schedule.
13:00–14:00
Abstract: In this talk, I will give an overview of computational models of argumentation and discuss how these models could be used for explainable AI. Argumentation has both inferential and dialogical aspects. Argumentation as a form of reasoning makes explicit the reasons for the conclusions that are drawn and how conflicts between reasons are resolved with preferences. Argumentation as a form of dialogue is about protocols for resolving conflicts of opinion by verbal means. Both aspects of argumentation have been computationally modelled, and some of this work has been applied to explainable AI.
Coffee Break and Networking (15 minutes)
14:15–15:15
Short (5–7 minute) talks by researchers, focusing on challenges, innovative solutions, ideas, and collaboration opportunities.
Aashutosh Ganesh (Maastricht University)
How do we design explanations for deep learning models applied to video?
Dina Zilbersthein (Maastricht University)
Fair and transparent recommender systems for advertisements
Roan Schellingerhout (Maastricht University)
The Effect of Cognitive Orientation on Textual Explanation Preferences
Sofoklis Kithardis (Leiden University)
Fair Multi-Objective Machine Learning
Isel Grau (TU/e)
SOFI: A Sparseness-Optimized Feature Importance method
Mohsen Abbaspour Onari (TU/e)
From Explainability to Trustworthiness: The Role of Explainability in Building Trust in AI
Jesse Heyninck (Open University)
Combining LLMs and Answer Set Programming for Safe and Transparent AI
Coffee Break and Networking (15 minutes)
15:30–16:10
Longer talks (20 minutes) focusing on specific challenges, innovative solutions, ideas, and collaboration opportunities.
Meike Nauta (TU/e, Datacation)
Explainable AI: The who, why, what, and how for success in practice
Abstract: Explainable AI plays a pivotal role in making AI successful in real-life applications, but achieving this requires addressing more than just technical challenges. This talk delves into the "why, who, what, and how" of AI projects, aiming to bridge the gap between academia and industry to foster collaboration for responsible and impactful AI implementations.
Niki van Stein (Leiden University)
XAI for time-series and semantic continuity
Abstract: The increasing complexity of machine learning models calls for explainable artificial intelligence (XAI) methods that not only clarify model predictions but also align with human understanding. In this talk, we explore innovative approaches to XAI, focusing on two key areas: time-series analysis and semantic continuity. Using real-world examples from genomics, predictive maintenance, and counterfactual explanations, we examine how global sensitivity analysis and optimization techniques enhance interpretability across domains. We also introduce the concept of semantic continuity—how minimal semantic changes in input data should correspond to subtle changes in model outputs—and discuss its implications for evaluating and benchmarking XAI methods. Through a mix of case studies and proof-of-concept experiments, we delve into the challenges and opportunities of developing XAI systems that bridge the gap between technical complexity and meaningful insights. This talk will provoke discussion on how to advance fairness, usability, and trust in XAI.
Coffee Break and Networking (10 minutes)
16:20–17:00
Longer talks (20 minutes) focusing on specific challenges, innovative solutions, ideas, and collaboration opportunities.
Roos Scheffers (Utrecht University)
Empirically investigating argumentation-based explanations
Abstract: In formal argumentation theory, multiple argumentation-based explanation methods have been formulated based on ideas from social and cognitive science. However, these have not yet been empirically validated. To apply argumentation for explanations, we need to test whether argumentation-based explanations can be understood by people and, if so, which of the available argumentation-based explanation options people prefer. This study describes and empirically validates two types of relatedness used in argumentation-based explanations: related admissibility and directly related admissibility. This was done by instructing participants to select arguments from an argumentation framework to explain another argument in that framework. The explanations selected by the participants were compared to argumentation-based explanations that use relatedness. We found that both forms of relatedness are cognitively plausible. This gives insight into how argumentation theory can be applied in the real world to provide explanations.
Loan Ho (VU)
Dialogue-based Explanations for Logical Reasoning using Structured Argumentation
Abstract: Explaining inconsistency-tolerant reasoning in knowledge bases (KBs) is a prominent topic in Artificial Intelligence (AI). In this talk, I present a generic argumentation-based approach to this problem. The approach is defined for logics involving reasoning with maximal consistent subsets and shows how any such logic can be translated to argumentation. Our work then provides dialogue models as dialectical proof procedures to compute and explain a query answer with respect to inconsistency-tolerant semantics. This allows us to construct dialectical proof trees as explanations, which are more expressive and intuitive than existing explanation formalisms.
17:00–17:30
Feedback Round: Participants share insights from the meet-up and provide suggestions for future events.
Open Mic: Time for attendees to share quick updates, findings, or challenges.
Community Planning: Collaboratively decide on potential themes and the venue for the next meet-up.