AITE project

Image by Alan Warburton / © BBC / Better Images of AI / Plant / CC-BY 4.0

At present, it remains opaque why machine learning (ML) systems decide or answer as they do. When an image classifier says "this is a train", does it 'recognise' the train, or only the rails, or something else entirely? How can we be sure that it decides as it does for the right reasons? This problem is at the heart of at least two debates: Can we trust artificially intelligent (AI) systems? And if so, on what basis? Would an explanation of the decision help our understanding and ultimately foster trust? And if so, what kind of explanation? These are the central questions we want to address in this project.

The project is divided into three interrelated subprojects. In Subproject 1, we formulate epistemological and scientific norms of explanation that put constraints on explainable AI (XAI). In Subproject 2, we investigate moral norms for XAI, based on a classification of morally loaded cases of algorithmic decision-making. In Subproject 3, we analyse the notion of “trust” in AI systems and its relation to explainability. The three subprojects will collaborate to establish ethical, epistemological and scientific standards for trustworthy/reliable AI and XAI. We thereby address the need to develop concrete action proposals, which will also be of interest to AI engineers and to institutions deploying AI systems.

The project is a joint endeavour of the “Ethics and Philosophy Lab” (EPL) of the DFG Cluster of Excellence “Machine Learning: New Perspectives for Science” (ML-Cluster) and the “International Centre for Ethics in the Sciences and Humanities” (IZEW) at the University of Tübingen.