9:00 Welcome
9:10 Keynote talk: Rosina Weber [slides are available here]
9:40 Paper session
See What I Mean? CUE: A Cognitive Model of Understanding Explanations [slides are available here]
Can AI Explanations Make You Change Your Mind?
Who Benefits from AI Explanations? Towards Accessible and Interpretable Systems
Demystifying Black-Box Models in 2D Image Classification through Neighborhood-Based Influence
10:40 Coffee break
11:00 Paper session
PlanPilot: Efficient Navigation in Plan Space [slides are available here]
Explainable Reinforcement Learning Agents Using World Models [slides are available here]
Counterfactual Strategies for Markov Decision Processes [paper accepted in the main track]
Interpreting CFD Surrogates through Sparse Autoencoders [slides are available here]
SurvTreeSHAP(t): Scalable Explanation Method for Tree-Based Survival Models
12:15 Lunch break (lunch not provided by IJCAI)
13:45 Paper session
Beyond Shapley Values: Cooperative Games for the Interpretation of Machine Learning Models [slides are available here]
Probing the Embedding Space of Transformers via Minimal Token Perturbations [slides are available here]
Imputation Uncertainty in Interpretable Machine Learning Methods [slides are available here]
Scaling and Robustness of Interpretable Text Classification: An Analysis of ProtoryNet [slides are available here]
TRIP: A Nonparametric Test to Diagnose Biased Feature Importance Scores
15:00 Coffee break
15:30 Fishbowl discussion
16:30 Closing