The use of machine learning has become pervasive in our society, from specialized scientific data analysis to industry intelligence and practical applications with a direct impact on the public domain. This impact raises a range of social issues, including privacy, ethics, liability and accountability.
By way of example, the European Union's General Data Protection Regulation, a trans-national law passed in April 2016, will go into effect in May 2018. It includes an article on "Automated individual decision making, including profiling" that, in effect, establishes a right of citizens to receive an explanation for algorithmic decisions that may affect them. This could jeopardize the use of any machine learning method that is not comprehensible and interpretable, at least in applications that affect individuals.
This situation affects safety-critical environments in particular and puts model interpretability at the forefront as a key concern for the machine learning community. In this context, the workshop aims to discuss the use of machine learning in safety-critical environments, with special emphasis on three main application domains:
We aim to address questions such as: How do we make our models more comprehensible and transparent? Should we always trust our decision-making processes? How do we involve domain experts in making machine learning pipelines more interpretable in practice, from the viewpoint of the application domain?