ICML 2019 Workshop on

Uncertainty & Robustness in Deep Learning

Friday June 14, Long Beach, California, USA

Room: Hall B

There has been growing interest in making deep neural networks robust for real-world applications. Challenges arise when models receive inputs drawn from outside the training distribution: for example, a neural network trained to classify handwritten digits may assign high-confidence predictions to cat images. Such anomalies are frequently encountered when ML models are deployed in the real world. Generalization to unseen and worst-case inputs is also essential for robustness to distributional shift. Well-calibrated predictive uncertainty estimates are indispensable for many machine learning applications, such as self-driving cars and medical diagnosis systems. For ML models to predict reliably in open environments, we must deepen our technical understanding in the following areas:

  1. Learning algorithms that are robust to changes in input data distribution (e.g., detect out-of-distribution examples),
  2. Mechanisms to estimate and calibrate confidence produced by neural networks,
  3. Methods to improve robustness to adversarial and non-adversarial corruptions, and
  4. Key applications for uncertainty (e.g., computer vision, robotics, self-driving cars, medical imaging) as well as broader machine learning tasks.

This workshop will bring together researchers and practitioners from the machine learning community and highlight recent work that contributes to addressing these challenges. Our agenda will feature contributed papers alongside invited talks. Through the workshop we hope to help identify fundamentally important directions for robust and reliable deep learning, and to foster future collaborations. We invite the submission of papers on topics including, but not limited to:

  • Out-of-distribution detection and anomaly detection
  • Robustness to corruptions, adversarial perturbations, and distribution shift
  • Calibration
  • Probabilistic (Bayesian and non-Bayesian) neural networks
  • Open world recognition and open set learning
  • Security
  • Quantifying different types of uncertainty (known unknowns and unknown unknowns) and types of robustness
  • Applications of robust and uncertainty-aware deep learning

Please see the call for papers for formatting instructions and deadlines.

Invited Speakers


Yixuan (Sharon) Li

Incoming Assistant Professor, University of Wisconsin–Madison

Balaji Lakshminarayanan

Senior Research Scientist, DeepMind

Dan Hendrycks

PhD student, UC Berkeley

Thomas Dietterich

Professor, Oregon State University

Justin Gilmer

Researcher, Google Brain


Travel awards are kindly sponsored by Google and DeepMind.