Online Link: https://monash.zoom.us/j/89823239283?pwd=oxSHjP3b7WqSEfDux2T8iXJCtsRoa9.1
7:00 - 7:10
Tobias Huber, Augsburg University
XAI workshop chair
7:10 - 8:25
7:10 Kary Främling
Contextual Importance and Utility in Python: New Functionality and Insights with the Py-Ciu Package
7:35 Yong Zhao
ONNXExplainer: an ONNX Based Generic Framework to Explain Neural Networks Using Shapley Values
8:00 Uri Menkes
"You Just Can't go Around Killing People" - Explaining Agent Behaviour to a Human Terminator
8:25 - 9:00
9:10 - 10:25
9:10 Jin-Jian Xu
XGeoS-AI: An Interpretable Learning Framework for Deciphering Geoscience Image Segmentation
9:25 Hyeonggeun Yun
Interaction as Explanation: A User Interaction-based Method for Explaining Image Classification Models
9:50 Antonio Serino
Augmenting XAI with LLMs: A Case Study in Banking Marketing Recommendations
Online Link: https://monash.zoom.us/j/85068492117?pwd=Qr69jnrqqRb9iCOLlamkounULb5Ixu.1
03:00 - 3:10
Mor Vered, Monash University
XAI workshop chair
3:10 - 4:25
3:10 Kunal Rathore
Generating Part-Based Global Explanations via Correspondence
3:35 Nicholas Kresting
A Harmonic Metric for LLM Trustworthiness
4:00 Ximing Wen
The Impact of an XAI-Augmented Approach on Binary Classification with Scarce Data
4:25 - 5:00
5:00 - 6:15
5:00 Xinyu Zhang
Challenges in Interpretability of Additive Models
5:25 Hilarie Sit
Improving Explainability of Softmax Classifiers Using a Prototype-Based Joint Embedding Method
5:50 Belona Sonna
Can Unfairness in ML Decision-Making Processes be Assessed Through the Lens of Formal Explanations?
6:15 - 6:25
Hendrik Baier, Eindhoven University of Technology
XAI Workshop Chair