TempXAI:
Explainable AI for Time Series and Data Streams Tutorial-Workshop

   TempXAI Workshop at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD) 2024

Workshop Overview

This full-day workshop at ECML PKDD 2024 extends last year's editions of the Explainable AI (XAI) for Time Series and Explainable Artificial Intelligence from Static to Dynamic workshops into a combined full-day workshop and tutorial on Explainable AI for Time Series and Data Streams.

The workshop explores the crucial intersection of Explainable AI (XAI) and the challenges posed by time series and data streams. Our primary objectives include understanding Dynamic Interpretability: delving into techniques that offer transparent insights into time-evolving data and provide a better understanding of machine learning models in dynamic environments. We aim to advance Incremental Explainability by investigating methods that keep interpretations effective as models adapt to changing data over time, or that explain these changes themselves. Moreover, we seek to promote Real-time Decision-making by exploring applications of XAI in real-time decision-making scenarios, addressing the need for interpretable models in time-sensitive contexts. The workshop also aims to share practical insights by encouraging submissions of novel XAI tools specific to time series and data streams, alongside case studies and practical implementations of interpretable machine learning for time series and data streams. The TempXAI workshop welcomes papers covering one or several of these topics, though submissions are not limited to them.


KEYNOTE SPEAKERS

Riccardo Guidotti

University of Pisa

Explanation Methods for Sequential Data Models - From Post-hoc to Interpretable-by-design Approaches for Time Series Classification


The increasing availability of high-dimensional time series data, such as electrocardiograms, stock indices, and motion sensors, has led to the widespread use of time series classifiers in critical fields like healthcare, finance, and transportation. However, the complexity of these models often makes them black boxes, hindering interpretability. In high-stakes domains, explaining a model’s decisions is vital for trust and accountability. Effective eXplainable AI (XAI) methods for sequential data are essential for providing insights and reinforcing expert decision-making. This presentation addresses the challenge of explaining sequential data models, focusing on time series classification. We begin by reviewing the current literature on XAI for time series classification. Then, we present a series of works that illustrate the transition from general-purpose post-hoc explanation approaches to interpretable-by-design methods. First, we introduce a local, post-hoc, model-agnostic, subsequence-based time series explainer that can be used to elucidate the predictions of any time series classifier. Next, we demonstrate, through a real case study on car crash prediction, how insights from a post-hoc explainer were crucial in developing an effective interpretable-by-design method. Additionally, we showcase an interpretable subsequence-based classifier by enhancing SAX with dilation and stride to capture temporal patterns effectively. Finally, we explore the use of subsequence-based approaches in other sequential domains like mobility trajectories and text.
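
To make the subsequence-based direction more tangible, the sketch below shows a SAX-style discretization of time series subsequences with dilation and stride in Python. It is only an illustration under our own assumptions (function names, alphabet size, and breakpoints are ours), not the speaker's actual method.

```python
# Illustrative sketch only: SAX-style discretization of time series
# subsequences with dilation and stride. All names and parameter choices
# here are hypothetical, not the implementation discussed in the talk.
import numpy as np

# Gaussian breakpoints for a 4-symbol alphabet (standard SAX assumption).
BREAKPOINTS = np.array([-0.6745, 0.0, 0.6745])
ALPHABET = np.array(list("abcd"))

def sax_word(window: np.ndarray, n_segments: int = 4) -> str:
    """Z-normalize a window, reduce it with PAA, and map segments to symbols."""
    std = window.std()
    normed = (window - window.mean()) / std if std > 1e-8 else window * 0.0
    segments = np.array_split(normed, n_segments)
    means = np.array([seg.mean() for seg in segments])
    return "".join(ALPHABET[np.searchsorted(BREAKPOINTS, means)])

def dilated_sax_words(series: np.ndarray, window: int = 8,
                      stride: int = 4, dilation: int = 2) -> list[str]:
    """Slide a dilated window over the series: each subsequence takes every
    `dilation`-th point, and consecutive windows start `stride` points apart."""
    span = (window - 1) * dilation + 1  # raw length covered by one dilated window
    words = []
    for start in range(0, len(series) - span + 1, stride):
        subseq = series[start:start + span:dilation]
        words.append(sax_word(subseq))
    return words

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ts = np.sin(np.linspace(0, 6 * np.pi, 200)) + 0.1 * rng.standard_normal(200)
    print(dilated_sax_words(ts, window=8, stride=20, dilation=2))
```

Intuitively, dilation lets a short symbolic word summarize a wider temporal span, while stride controls how densely subsequences are sampled along the series.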

Yunpu Ma

LMU Munich

Advancing Learning with Temporal Knowledge Graphs


Relational machine learning has become a pivotal area of study, focusing on the analysis of relational data represented in graph structures, such as knowledge graphs (KGs). These graphs capture complex relationships between entities and are crucial in domains like social networks, recommender systems, and computational finance. Recently, the focus has shifted towards temporal knowledge graphs (tKGs), which integrate the temporal aspect of data, allowing for the modeling of evolving relationships over time. In this talk, I will introduce our innovative approaches to learning with tKGs, emphasizing the importance of interpretability and dynamic modeling. One of our key contributions is a framework that combines graph representation learning with temporal reasoning, enabling accurate predictions and providing clear and understandable explanations for these predictions. This is achieved through mechanisms that focus on relevant subgraphs and dynamic relationships, offering insights into the underlying processes. Another contribution, TLogic, is a novel framework that leverages temporal logical rules to enhance the explainability and robustness of predictions on tKGs. TLogic offers a structured way to interpret the temporal dynamics within the data, ensuring that predictions are consistent with the temporal context. This approach also highlights the potential for generalizing learned knowledge across different datasets, showcasing the flexibility and power of temporal knowledge graphs in advancing the field of relational machine learning.
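
As a concrete illustration of rule-based reasoning over a tKG, here is a minimal Python sketch: a temporal knowledge graph stored as (subject, relation, object, timestamp) quadruples and a toy rule that predicts a head relation from an earlier body fact. This is a simplified reading of temporal-rule application in the spirit of TLogic, not the actual TLogic algorithm; all names and example data are hypothetical.

```python
# Illustrative sketch only: a toy temporal rule applied to a temporal
# knowledge graph of (subject, relation, object, timestamp) quadruples.
# This is not the TLogic algorithm; names and data are invented.
from dataclasses import dataclass

Quad = tuple[str, str, str, int]  # (subject, relation, object, timestamp)

@dataclass
class TemporalRule:
    """If (X, body_rel, Y) holds at some earlier time, predict (X, head_rel, Y)."""
    body_rel: str
    head_rel: str

def apply_rule(rule: TemporalRule, facts: list[Quad], query_time: int) -> list[Quad]:
    """Return head predictions supported by body facts strictly before query_time."""
    predictions = []
    for (s, r, o, t) in facts:
        if r == rule.body_rel and t < query_time:
            predictions.append((s, rule.head_rel, o, query_time))
    return predictions

if __name__ == "__main__":
    tkg = [
        ("Alice", "negotiated_with", "Bob", 1),
        ("Alice", "visited", "Carol", 2),
        ("Dave", "negotiated_with", "Erin", 3),
    ]
    rule = TemporalRule(body_rel="negotiated_with", head_rel="signed_treaty_with")
    # Predict what might hold at time 4, with a rule-based reason for each answer.
    print(apply_rule(rule, tkg, query_time=4))
```

Because every prediction is traceable to the body fact that triggered it, rule-based predictions of this kind are explainable by construction and respect the temporal ordering of the data.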

Organizers

Zahraa S. Abdallah

University of Bristol

Fabian Fumagalli

Bielefeld University

Barbara Hammer

Bielefeld University

Eyke Hüllermeier

LMU Munich

Matthias Jakobs

TU Dortmund University

Maximilian Muschalik

LMU Munich

Emmanuel Müller

TU Dortmund University

Panagiotis Papapetrou

Stockholm University

Amal Saadallah

TU Dortmund University

George Tzagkarakis

Foundation for Research and Technology – Hellas, Institute of Computer Science (FORTH-ICS)

Program Committee

Reviewers:

Program

This workshop will be held in person at ECML PKDD 2024 at the Radisson Blu Hotel, Vilnius, Lithuania.

Date: September 9, 2024
Room: tba.

VENUE (September 9)

Radisson Blu Hotel, Vilnius, Lithuania

SPONSORS

This workshop is supported by the Federal Ministry of Education and Research of Germany and the state of North Rhine-Westphalia as part of the Lamarr Institute for Machine Learning and Artificial Intelligence.

This workshop is also a result of the collaborative research center "TRR 318 - Constructing Explainability" and is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation): TRR 318/1 2021 – 438445824.