Attention-based Counterfactual Explanation for Multivariate Time Series 



Project Summary

Over the past decade, artificial intelligence (AI) and machine learning (ML) systems have made significant strides and achieved impressive success in a wide range of applications and tasks. A key challenge with many state-of-the-art ML models, however, is their lack of transparency and interpretability. To address this challenge, the field of EXplainable Artificial Intelligence (XAI) has emerged. The XAI paradigm includes two main categories: feature attribution and counterfactual explanation methods. While feature attribution methods explain the reasons behind a model's decision, counterfactual explanation methods discover the smallest input changes that would result in a different decision. In this paper, we propose Attention-Based Counterfactual Explanation (AB-CF), a novel model that generates intuitive post-hoc counterfactual explanations by narrowing attention to a few important segments. We validated our model using seven real-world time-series datasets from the UEA repository. Our experimental results show the superiority of AB-CF in terms of validity, proximity, sparsity, contiguity, and efficiency compared with other competing state-of-the-art baselines.
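To illustrate the general idea of a segment-based counterfactual (this is a simplified, self-contained sketch with a toy classifier, not the actual AB-CF implementation; all function names here are illustrative), the snippet below greedily replaces the most "important" segments of a query series with values from an instance of the target class until the model's decision flips:

```python
import numpy as np

def predict(series):
    """Toy stand-in classifier: class 1 if the series mean exceeds zero, else class 0."""
    return int(series.mean() > 0)

def segment_counterfactual(query, donor, importance, seg_len=4):
    """Greedily copy the highest-importance segments from a donor instance
    (of the target class) into the query until the prediction changes.
    The `importance` vector stands in for per-timestep attention scores."""
    cf = query.copy()
    target = 1 - predict(query)
    # Rank non-overlapping segment start positions by summed importance, highest first.
    starts = range(0, len(query) - seg_len + 1, seg_len)
    ranked = sorted(starts, key=lambda s: importance[s:s + seg_len].sum(), reverse=True)
    for s in ranked:
        cf[s:s + seg_len] = donor[s:s + seg_len]
        if predict(cf) == target:
            return cf  # decision flipped: a valid counterfactual
    return cf

rng = np.random.default_rng(0)
query = -np.abs(rng.normal(size=16))        # all negative -> class 0
donor = np.abs(rng.normal(size=16)) + 1.0   # all positive -> class 1
importance = np.abs(donor - query)          # stand-in for attention scores
cf = segment_counterfactual(query, donor, importance)
print(predict(query), predict(cf))  # prints "0 1": the decision flips
```

Because only a few contiguous segments are changed, the resulting counterfactual stays sparse and contiguous, which is the intuition behind the metrics reported above.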

Tutorial

Access Notebook Here

The code is also available in our GitHub repository: https://github.com/Luckilyeee/AB-CF 


Download the UEA datasets from the time series classification website: www.timeseriesclassification.com/index.php
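The UEA archives are distributed as `.ts` files, where each line after the `@data` header stores one instance, with dimensions separated by `:` and the class label as the final field. As a rough sketch (a minimal parser written for illustration, assuming the standard `.ts` layout; toolkits such as sktime/aeon provide full loaders), the format can be read like this:

```python
# Toy multivariate .ts snippet: 2 instances, 2 dimensions each, labels "walk"/"run".
TOY_TS = """@problemName Toy
@univariate false
@classLabel true walk run
@data
1.0,2.0,3.0:4.0,5.0,6.0:walk
7.0,8.0,9.0:1.0,1.0,1.0:run
"""

def parse_ts(text):
    """Parse a .ts-formatted string into (instances, labels).
    Each instance is a list of dimensions; each dimension is a list of floats."""
    instances, labels = [], []
    in_data = False
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.lower().startswith("@data"):
            in_data = True       # header section ends here
            continue
        if line.startswith("@") or not in_data:
            continue             # skip remaining header metadata
        *dims, label = line.split(":")
        instances.append([[float(v) for v in d.split(",")] for d in dims])
        labels.append(label)
    return instances, labels

X, y = parse_ts(TOY_TS)
print(len(X), len(X[0]), y)  # prints "2 2 ['walk', 'run']"
```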