Distribution-free methods enable rigorous uncertainty quantification with any (possibly misspecified) model and any (possibly unknown) data distribution.
Accuracy alone does not suffice for reliable, consequential decision-making; we also need to quantify uncertainty.
Distribution-free UQ gives finite-sample statistical guarantees for any predictive model, no matter how poor or misspecified, and any data distribution, even if unknown.
DF techniques such as conformal prediction represent a new, principled approach to UQ for complex prediction systems such as deep learning models.
This workshop will bridge applied machine learning and distribution-free uncertainty quantification, catalyzing work at this interface.
University of Chicago
Stanford University
Yale University
University of California, Berkeley
Carnegie Mellon University
Royal Holloway, University of London
Carnegie Mellon University
Cornell University
Distribution-free methods make minimal assumptions about the data distribution or model, yet still provide rigorous uncertainty quantification. Examples include conformal prediction, tolerance regions, risk-controlling prediction sets, and calibration by binning. We take a broad view of DF methods, so any assumption-light approach to uncertainty quantification is welcome.
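To make the kind of guarantee at stake concrete, here is a minimal sketch of split conformal prediction for regression. The function name, the absolute-residual score, and the scikit-learn-style predict interface are illustrative assumptions, not a prescription for any particular method.

```python
import numpy as np

def split_conformal_interval(model, X_cal, y_cal, X_test, alpha=0.1):
    """Split conformal prediction intervals (illustrative sketch).

    `model` is any fitted regressor with a .predict method; no assumptions
    are made about how well it fits or about the data distribution beyond
    exchangeability of calibration and test points.
    """
    # Nonconformity scores on held-out calibration data: absolute residuals.
    scores = np.abs(y_cal - model.predict(X_cal))

    # Finite-sample-corrected quantile level, clipped to 1.
    n = len(scores)
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(scores, q_level, method="higher")

    # Intervals: point prediction plus/minus the calibrated margin.
    preds = model.predict(X_test)
    return preds - qhat, preds + qhat
```

Under exchangeability, the returned intervals contain the true label with probability at least 1 - alpha, regardless of the model or the data distribution; this is the flavor of finite-sample, distribution-free guarantee the workshop is about.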