Explainable Artificial Intelligence
from Static to Dynamic
DynXAI Workshop at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD) 2023
WORKSHOP OVERVIEW
The increasing use of black-box machine learning (ML) models in high-stakes decision tasks, such as healthcare and finance, has made understanding a model's reasoning indispensable. As a solution, the research field of Explainable Artificial Intelligence (XAI) aims to provide explanations that uncover the internal logic of black-box ML models.
To date, XAI research has focused primarily on static learning scenarios, in which a model is trained once in a batch setting and explained with respect to a fixed data distribution and task description. However, many ML models are applied in dynamic learning environments. Incremental learning from rapid data streams requires ML models to constantly monitor their current performance and to react quickly to changes in the underlying data distribution induced by concept drift. In the continual, lifelong learning setting, a change in the learning task induces changes in the underlying ML model, such that new problems can be solved without catastrophic forgetting of earlier solutions. Progressive data science on large (big data) datasets requires a model to be evaluated on a small subset of the data before it is fully fitted on the whole dataset.
Many real-world applications in high-stakes environments, such as online credit risk scoring for financial services, online sensor network analysis, real-time analysis of big data in healthcare, analysis of energy consumption patterns, or autonomous driving with continual learning, require ML models to change dynamically over time. Faithfully explaining such time-dependent ML models is a challenging task, as these models can change drastically over time, necessitating substantial changes to the resulting explanations. Repeatedly computing traditional batch XAI methods may be costly or inefficient in many application domains. Moreover, it may not even be straightforward what such a re-computation should look like, as it is often unclear how a changing data distribution or a re-framing of the task can be incorporated into traditional XAI methods.
Consequently, the explanation of ML models in dynamic environments has recently been studied from different perspectives. In the incremental learning setting, incremental XAI methods have been proposed that provide dynamic feature importance scores for tree-based models (model-specific) and for arbitrary models (model-agnostic). Dynamic XAI methods have also been used to detect and understand concept drift on evolving data streams, as well as to improve model performance. In continual learning, dynamic explanation techniques have been proposed to detect and mitigate catastrophic forgetting.
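To illustrate the idea of dynamic, model-agnostic feature importance on a stream, here is a minimal sketch: a simple online logistic regression trained one sample at a time, with permutation feature importance recomputed over a sliding window of recent samples so that the explanation tracks a concept drift. All names, the window size, and the synthetic drift are illustrative assumptions, not any specific method presented at the workshop.

```python
import numpy as np

class OnlineLogReg:
    """Logistic regression fitted incrementally with plain SGD."""
    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.w + self.b)))

    def learn_one(self, x, y):
        err = self.predict_proba(x) - y  # gradient of the log-loss
        self.w -= self.lr * err * x
        self.b -= self.lr * err

def window_importance(model, X, y, rng):
    """Permutation feature importance over a window of recent samples:
    the accuracy drop when one feature column is shuffled."""
    acc = lambda Xm: np.mean((model.predict_proba(Xm) > 0.5) == y)
    base = acc(X)
    scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        scores[j] = base - acc(Xp)
    return scores

# Synthetic stream with concept drift: the label depends on feature 0
# in the first half of the stream and on feature 1 in the second half.
rng = np.random.default_rng(0)
n, window = 4000, 200
X = rng.normal(size=(n, 3))
y = np.where(np.arange(n) < n // 2, X[:, 0] > 0, X[:, 1] > 0).astype(float)

model = OnlineLogReg(n_features=3)
snapshots = {}
for t in range(n):
    model.learn_one(X[t], y[t])
    if (t + 1) % 1000 == 0:  # periodically re-explain the current model
        sl = slice(t + 1 - window, t + 1)
        snapshots[t + 1] = window_importance(model, X[sl], y[sl], rng)

for step, imp in snapshots.items():
    print(step, np.round(imp, 2))
```

Before the drift, the importance score of feature 0 dominates; after the model has adapted to the second concept, the score mass shifts to feature 1, showing why explanations of models in dynamic environments must be recomputed (or maintained incrementally) rather than produced once in batch.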
XAI approaches in areas of interest include:
online learning
incremental learning from data streams
concept drift
continual learning
progressive data science
learning from big data
efficient or real-time explanations
time-dependent explanations
INVITED SPEAKERS
Gjergji Kasneci
Responsible Data Science
TU Munich
João Gama
Laboratory of Artificial Intelligence and Decision Support
University of Porto
ORGANIZERS
Barbara Hammer
Bielefeld University
Eyke Hüllermeier
LMU Munich
Fabian Fumagalli
Bielefeld University
Maximilian Muschalik
LMU Munich
PROGRAM COMMITTEE
Eirini Ntoutsi, University of the Bundeswehr Munich, Germany
Jean-Charles Lamirel, Henri Poincaré University, France
Davide Bacciu, Università di Pisa, Italy
Georg Krempl, Utrecht University, Netherlands
Manuel Roveri, Politecnico di Milano, Italy
João Gama, University of Porto, Portugal
Johannes Haug, Bosch, Germany
Gjergji Kasneci, Technical University of Munich, Germany
Jerzy Stefanowski, Poznan University of Technology, Poland
Vera Hofer, University of Graz, Austria
Barbara Hammer, Bielefeld University, Germany
Eyke Hüllermeier, LMU Munich, Germany
Fabian Fumagalli, Bielefeld University, Germany
Maximilian Muschalik, LMU Munich, Germany
PROGRAM
This workshop will be held in person at ECML PKDD 2023 at the Officine Grandi Riparazioni, Torino, Italy.
Date: Friday (afternoon), September 22nd, 2023.
Room: PoliTo Room 10i
Please see the workshop's program below.
VENUE (September 22nd)
OGR - Officine Grandi Riparazioni, Torino, Italy
AFFILIATION
This workshop is a result of the Collaborative Research Center "TRR 318 - Constructing Explainability".
This workshop is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation): TRR 318/1 2021 – 438445824.