Fabrizio Silvestri

DIAG, La Sapienza University, Rome

February 17, 2023

Counterfactual Explanations of (some) ML Models

Abstract: The opaque reasoning of Neural Networks undermines human trust. For this reason, there is rising interest in methods that make the outcomes of prediction machines (ML models) less opaque. This talk will review some advances we have made over the last five years in eXplainable AI (XAI), particularly in producing counterfactual explanations for ML-based predictions. We will introduce the problem and review one of the first pieces of research we produced on the topic. We will then present research we have conducted on producing counterfactual explanations i) on Graph Neural Networks, ii) by using Reinforcement Learning, and iii) by using Neuro-Symbolic AI.
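
To give a flavor of what a counterfactual explanation is before the talk, here is a minimal toy sketch in the spirit of the classic gradient-based formulation (Wachter et al.): find a small perturbation of an input that flips a classifier's prediction. The logistic model, the weights w and b, and the hyperparameters lam, lr, and steps are all illustrative choices for this sketch; this is not the GNN, RL, or neuro-symbolic methodology covered in the talk.

```python
# Toy counterfactual search: minimize lam * (p(x') - target)^2 + ||x' - x||^2
# by gradient descent on x'. All values here are illustrative, not from the talk.
import numpy as np

# Hand-set "trained" logistic-regression classifier.
w = np.array([1.5, -2.0])
b = 0.25

def predict_proba(x):
    # Probability of the positive class under the logistic model.
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def counterfactual(x, target=1.0, lam=10.0, lr=0.02, steps=2000):
    """Search for a nearby input whose prediction is pushed toward `target`."""
    x_cf = x.copy()
    for _ in range(steps):
        p = predict_proba(x_cf)
        # Gradient of the prediction-loss term (logistic derivative p(1-p))
        # plus the gradient of the squared-distance penalty keeping x_cf near x.
        grad = lam * 2 * (p - target) * p * (1 - p) * w + 2 * (x_cf - x)
        x_cf -= lr * grad
    return x_cf

x = np.array([-1.0, 0.5])     # original input, classified as negative
x_cf = counterfactual(x)      # nearby input whose prediction flips to positive
print(f"p(x)={predict_proba(x):.3f}, p(x_cf)={predict_proba(x_cf):.3f}, "
      f"delta={x_cf - x}")
```

The vector `delta` printed at the end is the counterfactual explanation itself: the minimal change to the input features that would have changed the model's decision. The methods surveyed in the talk tackle the much harder settings where inputs are graphs or where gradients are unavailable.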
