Saturday, July 24

Workshop on Distribution-Free Uncertainty Quantification

at ICML 2021

View the full recorded workshop here.

Distribution-free methods enable rigorous uncertainty quantification with any model, even a misspecified one, and any data distribution, even an unknown one.

Accuracy alone does not suffice for reliable, consequential decision-making; we also need uncertainty.

Distribution-free UQ gives finite-sample statistical guarantees for any predictive model, however misspecified, and any data distribution, even an unknown one.

DF techniques such as conformal prediction represent a new, principled approach to UQ for complex prediction systems, such as deep learning.

This workshop will bridge applied machine learning and distribution-free uncertainty quantification, catalyzing work at this interface.

This workshop is an inclusive, beginner-friendly introduction to distribution-free UQ. We encourage everybody to submit a paper. Spotlight talks and poster presentations will be selected from the submissions.

Speakers and Panelists

Emmanuel J. Candès, Stanford University

Additional speakers and panelists from the University of Chicago, Yale University, the University of California, Berkeley, Carnegie Mellon University, Royal Holloway, University of London, and Cornell University.

What is distribution-free uncertainty quantification?

Distribution-free methods make minimal assumptions about the data distribution or model, yet still provide uncertainty quantification. Examples of DF methods include conformal prediction, tolerance regions, risk-controlling prediction sets, calibration by binning, and more. We take a broad outlook on DF methods, so any assumption-light uncertainty quantification approaches are welcome.
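To make the flavor of these methods concrete, here is a minimal sketch of split conformal prediction for regression, the simplest example from the list above. All names and data in it are illustrative: a deliberately terrible constant predictor stands in for "any misspecified model," and the finite-sample marginal coverage guarantee still holds.

```python
import numpy as np

def split_conformal_interval(cal_residuals, y_pred, alpha=0.1):
    """Split conformal prediction interval for a point prediction y_pred.

    cal_residuals: |y_i - f(x_i)| on a held-out calibration set.
    Returns an interval with marginal coverage >= 1 - alpha,
    regardless of the model f or the data distribution
    (assuming exchangeability of calibration and test points).
    """
    n = len(cal_residuals)
    # Finite-sample-corrected quantile level ceil((n+1)(1-alpha))/n.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    qhat = np.quantile(cal_residuals, q_level, method="higher")
    return y_pred - qhat, y_pred + qhat

# Synthetic demo: quadratic truth, and a badly misspecified model
# that always predicts 0.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=2000)
y = x**2 + rng.normal(0, 0.1, size=2000)
predict = lambda x: np.zeros_like(x)  # "any model, however bad"

x_cal, y_cal = x[:1000], y[:1000]
x_test, y_test = x[1000:], y[1000:]

cal_residuals = np.abs(y_cal - predict(x_cal))
lo, hi = split_conformal_interval(cal_residuals, predict(x_test), alpha=0.1)
coverage = np.mean((y_test >= lo) & (y_test <= hi))
print(f"empirical coverage: {coverage:.3f}")  # close to the 90% target
```

The intervals are wide, because the model is useless, but the stated coverage level is still met: distribution-free methods trade interval size, not validity, for model quality.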