This tutorial covers topics ranging from the fundamentals of uncertainty quantification to recent cutting-edge methods in the field. We expect tutorial attendees to develop:
• A solid understanding of the basic concepts around uncertainty quantification in machine learning and of a taxonomical map of the field.
• The ability to discriminate between different sources of uncertainty.
• Mathematical fundamentals for understanding and implementing several well-established uncertainty quantification techniques.
• Knowledge of robustness to and detection of out-of-distribution data, as well as of model calibration.
• The ability to model and leverage multiple annotations for uncertainty quantification.
• Knowledge of uncertainty quantification beyond classification and segmentation, including the basics of conformal prediction (see the sketch after this list).
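To give a flavour of the conformal prediction basics mentioned above, the sketch below shows split conformal prediction for classification. It is a minimal illustration under our own assumptions (the function name, nonconformity score, and array layout are ours), not an excerpt from the tutorial materials.

import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Build prediction sets with roughly (1 - alpha) marginal coverage.

    cal_probs:  (n, K) softmax probabilities on a held-out calibration split
    cal_labels: (n,)   integer true labels for the calibration split
    test_probs: (m, K) softmax probabilities for the test inputs
    """
    n = len(cal_labels)
    # Nonconformity score: 1 minus the probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Conformal quantile with the finite-sample correction ceil((n + 1)(1 - alpha)) / n.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(scores, q_level, method="higher")
    # A prediction set contains every class whose score does not exceed the threshold.
    return [np.where(1.0 - p <= q_hat)[0] for p in test_probs]

Under exchangeability of calibration and test data, such sets contain the true label with probability at least 1 - alpha on average, regardless of how well the underlying classifier is trained.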
Problem statement: sources of errors and uncertainty, distributional shift
Overview of passive and active solutions
Taxonomy, common confusions, ambiguities and misunderstandings
Bayesian perspective on uncertainty quantification (UQ)
UQ methods in medical image analysis
Practical hands-on session
Applications of UQ for real-world tasks
Calibration: what, when and why?
Visualising and measuring calibration (see the sketch after this outline)
Improving calibration
Practical hands-on session
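As a rough preview of the "visualising and measuring calibration" material, the sketch below computes the Expected Calibration Error (ECE) with equal-width confidence bins. The function name, variable names, and binning scheme are illustrative assumptions on our part rather than code from the tutorial repository.

import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """probs: (n, K) predicted probabilities; labels: (n,) integer ground truth."""
    confidences = probs.max(axis=1)      # confidence = top predicted probability
    predictions = probs.argmax(axis=1)   # predicted class
    accuracies = (predictions == labels).astype(float)

    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            # Gap between mean confidence and mean accuracy, weighted by the bin's share of samples.
            ece += in_bin.mean() * abs(confidences[in_bin].mean() - accuracies[in_bin].mean())
    return ece

A reliability diagram plots the same per-bin accuracies against per-bin confidences; ECE summarises the deviation from the diagonal in a single number.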
leads the research group for Machine Learning in Medical Image Analysis at the University of Tübingen. He previously worked at ETH Zürich, Imperial College London and King’s College London. His research focuses on developing methodologies to bridge the gap between ML theory and clinical applications. He has a special interest in safety and uncertainty modeling for medical image analysis, and he is an active member of the MICCAI community, having co-founded the UNSURE workshop.
is a machine learning researcher at University of California, Berkeley, working under the supervision of Michael I. Jordan and Jitendra Malik. He works on theoretical aspects of machine learning with applications in vision and healthcare, with a focus on applying modern statistical ideas to increase robustness of black-box models like deep neural networks.
8:00-8:30 Part 1: Introduction to Uncertainty, Distribution Shifts and Robustness
8:30-10:00 Part 2: Uncertainty Quantification Techniques and Hands-on
10:00-10:30 Coffee break
10:30-11:10 Part 3: Model Calibration Techniques and Hands-on
11:10-11:50 Keynote 1: Uncertainty Quantification in Segmentation & Image Reconstruction
11:50-12:30 Keynote 2: A Gentle Introduction to Conformal Prediction
This tutorial does not require any prior experience with uncertainty quantification or model calibration;
however, some background in mathematics and Python programming may be needed.
All the materials, including presentations and code, are available in our GitHub repository.
In case of questions about the tutorial, please contact:
mara(dot)graziani(at)hevs(dot)ch
nataliia(dot)molchanova(at)unil(dot)ch
meritxell(dot)bachcuadra(at)unil(dot)ch
UPF Barcelona
Univ. of Adelaide
Isomorphic Labs
IBM Research Europe, HES-SO Valais
CIBM, Univ. Lausanne
Univ. Lausanne, HES-SO Valais
Univ. of Cambridge
Sycai Medical
Univ. Surrey
Univ. Adelaide
UPF/ICREA Barcelona