ICML 2021 Workshop on

Uncertainty & Robustness in Deep Learning

July 23 or 24 (TBD), 2021

Note: The UDL workshop will take place virtually this year.


There has been growing interest in ensuring that deep learning systems are robust and reliable. Challenges arise when models receive samples drawn from outside the training distribution. For example, a neural network tasked with classifying handwritten digits may assign high-confidence predictions to cat images. Such anomalies are frequently encountered when ML models are deployed in the real world. Well-calibrated predictive uncertainty estimates are indispensable for many machine learning applications, such as self-driving vehicles and medical diagnosis systems. Generalization to unseen and worst-case inputs is also essential for robustness to distributional shift. To deploy ML models safely in open environments, we must deepen our technical understanding in the following areas:

  • Learning algorithms that can detect changes in data distribution (e.g. out-of-distribution examples) and improve out-of-distribution generalization (e.g. temporal, geographical, hardware, adversarial shifts);

  • Mechanisms to estimate and calibrate confidence produced by neural networks in typical and unforeseen scenarios;

  • Methods that guide learning towards an understanding of the underlying causal mechanisms, improving robustness and generalization and enforcing distributional invariances.
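As a toy illustration of the second item above (estimating and calibrating confidence), the sketch below shows two standard ingredients: a temperature-scaled softmax, which softens or sharpens a network's predictive distribution, and the expected calibration error (ECE), a common metric for how well confidence matches accuracy. This is a minimal, self-contained NumPy sketch for illustration only, not code from the workshop or any particular paper; all function names are our own.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Softmax with a temperature parameter; T > 1 softens the
    distribution, T < 1 sharpens it (T = 1 recovers plain softmax)."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then average the gap
    |accuracy - mean confidence| per bin, weighted by bin size."""
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()   # empirical accuracy in this bin
            conf = confidences[mask].mean()  # mean confidence in this bin
            ece += mask.mean() * abs(acc - conf)
    return ece
```

A perfectly calibrated model (e.g. predictions made with 75% confidence that are correct 75% of the time) has an ECE of zero; in practice, the temperature is tuned on a held-out set to minimize such a calibration gap.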

In order to achieve these goals, it is critical to dedicate substantial effort to:

  • Creating benchmark datasets and protocols for evaluating model performance under distribution shift;

  • Studying key applications of robust and uncertainty-aware deep learning (e.g., computer vision, robotics, self-driving vehicles, medical imaging), as well as broader machine learning tasks.

This workshop will bring together researchers and practitioners from the machine learning community and highlight recent work that addresses these challenges. Our agenda will feature contributed papers alongside invited talks. Through the workshop, we hope to identify fundamentally important directions for robust and reliable deep learning, and to foster future collaborations.

Topics of interest include but are not limited to:

• Model uncertainty estimation and calibration

• Probabilistic (Bayesian and non-Bayesian) neural networks

• Anomaly detection and out-of-distribution detection

• Robustness to distribution shift and out-of-distribution generalization

• Model misspecification

• Quantifying different types of uncertainty (model uncertainty, data uncertainty, contextual anomalies)

• Open world recognition and open set learning

• Connections between out-of-distribution generalization and adversarial robustness

• New datasets and protocols for evaluating uncertainty and robustness

Please see the call for papers for formatting instructions and deadlines.

Invited Speakers


Balaji Lakshminarayanan

Research Scientist, Google Brain

Dan Hendrycks

PhD Student, UC Berkeley

Sharon Yixuan Li

Assistant Professor, University of Wisconsin-Madison

Jasper Snoek

Research Scientist, Google Brain

Silvia Chiappa

Research Scientist, DeepMind; ELLIS Member

Thomas Dietterich

Professor, Oregon State University

Sebastian Nowozin

Research Scientist, Microsoft Research; ELLIS Member