Safe Machine Learning

Specification, Robustness and Assurance

ICLR 2019 Workshop. Monday, May 6th.

Room R6, Ernest N. Morial Convention Center, New Orleans.

Overview

The ultimate goal of ML research should be to have a positive impact on society and the world. As the number of applications of ML grows, it becomes increasingly important to address a variety of safety issues: both those that already arise with today's ML systems and those that may be exacerbated in the future by more advanced systems.

Current ML algorithms tend to be brittle and opaque, reflect undesired biases in the data, and often optimize for objectives that are misaligned with human preferences. We can expect many of these issues to worsen as our systems become more advanced (e.g., finding ever more clever ways to optimize for a misspecified objective). This workshop aims to bring together researchers in diverse areas such as reinforcement learning, formal verification, value alignment, fairness, privacy, and security to further the field of safety in machine learning.

We will focus on three broad categories of ML safety problems: specification, robustness, and assurance. Specification is defining the purpose of the system. Robustness is designing the system to withstand perturbations. Assurance is monitoring, understanding, and controlling the system's activity before and during its operation.

For more information on the research areas and submission details, see our Call for Papers.

Invited speakers and panelists

Organizing committee

Contact: safe.ml.iclr2019@gmail.com