Presently, it remains largely opaque why machine learning (ML) systems decide, predict, or simply answer as they do. When an image classifier says “this is a train”, does it recognise the train itself, only the rails, or something else entirely? How can we be sure that its decisions or predictions rest on appropriate reasons?

These problems are at the heart of at least two philosophical debates: Can we trust artificially intelligent (AI) systems? And if so, on what basis? If not, what are the limits of trust in AI? Would an explanation of the decision or prediction help our understanding and ultimately foster trust? And if so, what kind of explanation? If not, what else should we expect? These questions are equally relevant to the philosophy of AI, the ethics of AI, and to ML researchers themselves, as well as to the wider societal, political and legal debate on the implementation and regulation of AI. We expect that this debate will also need to address what kind of practical and epistemological gain we, as pragmatic and rational beings, expect from AI, and which conditions for delegating certain tasks we, as a society, are willing to accept.

Our closing conference will be the occasion to present our work, exchange with experts in the field and other interested scholars, and engage with the wider public in a panel discussion.

Program

Registration

Venue & Practicalities

Organisers:
PD Dr. Eric Raidl, Dr. Saeedeh Babaii, Sara Blanco, Oliver Buchholz.

Funding:
Baden-Württemberg Stiftung,
DFG Excellence Cluster “Machine Learning: New Perspectives for Science”.