Explainable AI: Science and technology for the eXplanation of AI decision making

Tuesday, 17 November 2020 - 2:30 PM

Fosca Giannotti

Knowledge Discovery and Data Mining Laboratory

ISTI - CNR

YouTube livestream

Slides

AI systems for (automated) decision making are often black boxes that classify or score objects without explaining why. This is problematic not only for the lack of transparency, which undermines trust, but also for possible biases that the learning algorithms inherit from human prejudices and collection artefacts hidden in the training data, which may lead to unfair or wrong decisions. That is why researchers strive for solutions to construct meaningful explanations of opaque AI systems, or better, to design AI systems that are, by design, intelligible to people with diverse backgrounds and expertise. The talk will focus on the open research question of how to design effective explanatory interfaces to AI systems (post-hoc or by design): explainable AI is a basic building block for preserving and expanding human autonomy, supporting conversations between people and machines, and helping humans make better decisions in critical domains such as health, justice and policy. The seminar will give an overview of current proposals and open research questions.
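As a concrete illustration of the post-hoc approach mentioned in the abstract, one common technique is to approximate an opaque model with an interpretable surrogate and read the explanation off the surrogate. The sketch below assumes scikit-learn; the dataset and model choices are illustrative only and are not the specific methods presented in the talk.

```python
# Minimal sketch of post-hoc explanation via a global surrogate model
# (illustrative example, not the method presented in the seminar).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Train an opaque "black box" classifier on the original labels.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# 2. Train an interpretable surrogate to mimic the black box's predictions
#    (not the true labels): its rules approximate the black box's behaviour.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# 3. Fidelity: how often the surrogate agrees with the black box on held-out data.
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"Surrogate fidelity to the black box: {fidelity:.2%}")

# 4. The shallow tree's human-readable rules stand in as the explanation.
print(export_text(surrogate, feature_names=list(X.columns)))
```

The fidelity score indicates how faithfully the extracted rules mirror the black box; a by-design (transparent-by-construction) approach would instead use the interpretable model directly as the decision maker.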