Transparent and Interpretable Machine Learning in

Safety Critical Environments

NIPS Workshop - Long Beach Convention Center

Friday, December 8, 2017


The use of machine learning has become pervasive in our society, from specialized scientific data analysis to industry intelligence and practical applications with a direct impact on the public domain. This impact raises a range of social issues, including privacy, ethics, liability and accountability.

By way of example, the European Union's General Data Protection Regulation (a trans-national law), adopted in April 2016, will go into effect in May 2018. It includes an article on "Automated individual decision making, including profiling" that, in effect, establishes citizens' right to receive an explanation for algorithmic decisions that may affect them. This could jeopardize the use of any machine learning method that is not comprehensible and interpretable, at least in applications that affect the individual.

This situation may affect safety-critical environments in particular, and it puts model interpretability at the forefront as a key concern for the machine learning community. In this context, the workshop aims to discuss the use of machine learning in safety-critical environments, with special emphasis on three main application domains:

  • Healthcare
    • Decision making (diagnosis, prognosis) in life-threatening conditions
    • Integration of medical experts' knowledge in machine learning-based medical decision support systems
    • Critical care and intensive care units
  • Autonomous systems
    • Mobile robots, including autonomous vehicles, in human-crowded environments
    • Human safety when collaborating with industrial robots
    • Ethics in robotics and responsible robotics
  • Compliance and liability in data-driven industries
    • Preventing unintended and harmful behaviour in machine learning systems
    • Machine learning and the right to an explanation in algorithmic decisions
    • Privacy and anonymity vs. interpretability in automated individual decision making

We aim to address questions such as: How do we make our models more comprehensible and transparent? Should we always trust our decision-making process? How do we involve field experts in making machine learning pipelines more practically interpretable from the viewpoint of the application domain?


Invited Speakers
  • FINALE DOSHI-VELEZ - Assistant Professor of Computer Science, Harvard
  • BARBARA HAMMER - Professor at the CITEC Centre of Excellence, Bielefeld University
  • SUCHI SARIA - Assistant Professor, Johns Hopkins University
  • DARIO AMODEI - Research Scientist, OpenAI
  • ADRIAN WELLER - Computational and Biological Learning Lab, University of Cambridge, and Alan Turing Institute


Organizers
Alessandra Tosi

Mind Foundry

Alfredo Vellido

Universitat Politècnica de Catalunya, UPC BarcelonaTech

Mauricio Alvarez

University of Sheffield