TempXAI:
Explainable AI for Time Series and Data Streams Tutorial-Workshop
TempXAI Workshop at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD) 2024
WORKSHOP OVERVIEW
This full-day workshop at ECML PKDD 2024 extends last year's Explainable AI (XAI) for Time Series and Explainable Artificial Intelligence from Static to Dynamic workshops into a combined full-day workshop and tutorial on Explainable AI for Time Series and Data Streams.
The workshop explores the crucial intersection of Explainable AI (XAI) and the challenges posed by time series and data streams. Our primary objectives include understanding Dynamic Interpretability: delving into techniques that offer transparent insights into time-evolving data and provide a better understanding of machine learning models in dynamic environments. We aim to advance Incremental Explainability by investigating methods that keep interpretability effective as models adapt to changing data over time, or that explain these changes themselves.

Moreover, we seek to promote Real-time Decision-making by exploring applications of XAI in real-time decision-making scenarios, addressing the need for interpretable models in time-sensitive contexts. The workshop also aims to share practical insights by encouraging the presentation of novel XAI tools specific to time series and data streams, as well as case studies and practical implementations of interpretable machine learning for time series and data streams.

The TempXAI workshop welcomes papers that cover, but are not limited to, one or several of the following topics:
Explainable AI methods for time series modeling
Explainable AI methods for data streams and models in flux
Interpretable machine learning algorithms for time series and data streams
Explainable deep learning for time series and data stream modeling
Explainable concept drift detection in time series and data streams
Explainable anomaly detection in time series or data streams
Explainable pattern discovery and recognition in time series
Explainability methods for multivariate time series
Explainable time series feature engineering
Explainable aggregation of time series
Integration of domain knowledge in time series modeling
Explainability for continual learning and domain adaptation
Visual explanations for (long) temporal data
Causality and stochastic process modeling
Explainability metrics and evaluation, including benchmark time series and streaming datasets
Case studies and applications of explainable artificial intelligence for time series or data streams
Regulatory compliance and ethics
KEYNOTE SPEAKER
Riccardo Guidotti
University of Pisa
Explanation Methods for Sequential Data Models: From Post-hoc to Interpretable-by-design Approaches for Time Series Classification
The increasing availability of high-dimensional time series data, such as electrocardiograms, stock indices, and motion sensors, has led to the widespread use of time series classifiers in critical fields like healthcare, finance, and transportation. However, the complexity of these models often makes them black boxes, hindering interpretability. In high-stakes domains, explaining a model’s decisions is vital for trust and accountability. Effective eXplainable AI (XAI) methods for sequential data are essential for providing insights and reinforcing expert decision-making. This presentation addresses the challenge of explaining sequential data models, focusing on time series classification. We begin by reviewing the current literature on XAI for time series classification. Then, we present a series of works that illustrate the transition from general-purpose post-hoc explanation approaches to interpretable-by-design methods. First, we introduce a local post-hoc agnostic subsequence-based time series explainer that can be used to elucidate the predictions of any time series classifier. Next, we demonstrate, through a real case study on car crash prediction, how insights from a post-hoc explainer were crucial in developing an effective interpretable-by-design method. Additionally, we showcase an interpretable subsequence-based classifier by enhancing SAX with dilation and stride to capture temporal patterns effectively. Finally, we explore the use of subsequence-based approaches in other sequential domains like mobility trajectories and text.
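The interpretable-by-design classifier mentioned in the abstract builds on SAX (Symbolic Aggregate approXimation). As a hedged illustration only, and not the speaker's method, the sketch below shows the classic SAX pipeline in Python: z-normalize a window, compress it with Piecewise Aggregate Approximation (PAA), and map each segment mean to a letter via equiprobable Gaussian breakpoints. The `stride` parameter here only mimics sliding-window word extraction; the dilation enhancement discussed in the talk is not implemented.

```python
import numpy as np
from statistics import NormalDist

def sax_word(window, word_len=4, alphabet_size=4):
    """One SAX word: z-normalize, PAA, then symbolize via N(0,1) breakpoints."""
    x = np.asarray(window, dtype=float)
    std = x.std()
    x = (x - x.mean()) / std if std > 1e-8 else np.zeros_like(x)
    # Piecewise Aggregate Approximation: mean of equal-length segments
    paa = x.reshape(word_len, -1).mean(axis=1)
    # Breakpoints that split the standard normal into equiprobable bins
    bps = [NormalDist().inv_cdf(i / alphabet_size) for i in range(1, alphabet_size)]
    return "".join("abcdefghij"[np.searchsorted(bps, v)] for v in paa)

def sax_words(series, window=8, stride=4, **kw):
    """Slide a window over the series and emit one SAX word per position."""
    return [sax_word(series[i:i + window], **kw)
            for i in range(0, len(series) - window + 1, stride)]

# A rising ramp maps to the ordered word "abcd"
print(sax_word(range(8)))          # abcd
print(sax_words(list(range(16))))  # ['abcd', 'abcd', 'abcd']
```

Because the resulting words are short strings over a small alphabet, subsequence-based classifiers of the kind discussed in the talk can expose which symbolic patterns drive a prediction.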
Organizers
Zahraa S. Abdallah
University of Bristol
Fabian Fumagalli
Bielefeld University
Barbara Hammer
Bielefeld University
Eyke Hüllermeier
LMU Munich
Matthias Jakobs
TU Dortmund University
Maximilian Muschalik
LMU Munich
Emmanuel Müller
TU Dortmund University
Panagiotis Papapetrou
Stockholm University
Amal Saadallah
TU Dortmund University
George Tzagkarakis
Foundation for Research and Technology – Hellas, Institute of Computer Science (FORTH-ICS)
Program Committee (Preliminary)
Telmo de Menezes e Silva Filho (University of Bristol)
Raul Santos-Rodriguez (University of Bristol)
Georgiana Ifrim (University College Dublin)
Christoph Bergmeir (University of Granada)
Mahsa Salehi (Monash University)
Grigorios Tsagkatakis (University of Crete)
Chiara Balestra (TU Dortmund University)
Bin Li (TU Dortmund University)
Raphael Fischer (TU Dortmund University)
Sebastian Buschjäger (TU Dortmund University)
Maja Schneider (TU Munich)
Jacopo De Stefani (TU Delft)
Udo Schlegel (Universität Konstanz)
Andreas Theissler (Aalen University)
Daniel A. Keim (Universität Konstanz)
Mario Refoyo (Universidad Politecnica de Madrid)
David Luengo (Universidad Politecnica de Madrid)
Stefan Heid (LMU Munich)
Jonas Hanselle (LMU Munich)
Andrea Cossu (University of Pisa)
Bardh Prenkaj (Sapienza University of Rome)
Emmanouil Manolis (University of the Bundeswehr Munich)
André Artelt (Bielefeld University)
Valerie Vaquet (Bielefeld University)
Fabian Hinder (Bielefeld University)
Program
This workshop will be held in person at ECML PKDD 2024 at the Radisson Blu Hotel, Vilnius, Lithuania.
Date: TBA
Room: TBA
VENUE (September 9-13)
Radisson Blu Hotel, Vilnius, Lithuania
SPONSORS
This workshop is supported by the Federal Ministry of Education and Research of Germany and the state of North Rhine-Westphalia as part of the Lamarr Institute for Machine Learning and Artificial Intelligence.
This workshop is also a result of the collaborative research center "TRR 318 - Constructing Explainability" and is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation): TRR 318/1 2021 – 438445824.