When: 14:00 -- 17:30 (ET) on Monday, December 8, 2025
Where: Rutgers University Inn and Conference Center, 178 Ryders Lane, New Brunswick, NJ 08901, USA.
This tutorial explores the role of information design in studying and improving calibration for decision making. Calibration is crucial in scenarios where accurate predictions or decisions must be made from uncertain or incomplete information, such as in online markets, machine learning, or healthcare. In the first part, we will introduce the foundational concepts of calibration, the challenges of decision-making under uncertainty, and how information design techniques can be leveraged to study and enhance calibration. In the second part, we will delve into advanced topics, including applications of calibration in modern settings. Three invited speakers will then present recent advances in calibration and information design. The tutorial aims to equip participants with a deep understanding of how to analyze and improve calibration in decision-making processes using principles from information design.
Tutorial on Information Design Perspective on Calibration:
14:00 - 14:45: Yiding Feng, "Information Design Perspective on Calibration" (Slides Part 1)
14:45 - 15:00: Wei Tang, "Information Design Perspective on Calibration" (Slides Part 2)
15:00 - 15:30: Coffee Break
15:30 - 16:00: Wei Tang, "Information Design Perspective on Calibration" (Slides Part 2, cont'd)
Invited Speakers:
16:00 - 16:30: Princewill Okoroafor, "Calibration Measures for Downstream Decision Making"
16:30 - 17:00: Mingda Qiao, "Truthful and Decision-Theoretic Calibration Measures"
17:00 - 17:30: Yifan Wu, "Calibration Error for Decision Making"
Princewill Okoroafor (Harvard)
Title: Calibration Measures for Downstream Decision Making
Abstract:
A forecast is considered trustworthy if users can act on its predicted probabilities as though they were the true underlying distributions from which outcomes are drawn, without incurring regret. Calibration encompasses a range of formal measures that capture this notion of trustworthiness in probabilistic forecasts. In this talk, we will explore different measures of calibration, from stronger measures which ensure reliable results for the users but are computationally and statistically difficult to achieve, to weaker notions that are easier to achieve but offer less reliability. We will arrive at an intermediate notion that is achievable using no more samples or computation than the easiest statistical learning tasks, while providing guarantees for downstream decision makers that are, in many cases, as powerful as the strongest notions of calibration.
Based on joint work with Robert Kleinberg and Michael P. Kim (FOCS'25).
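To make the notion of a calibration measure concrete ahead of the talk, below is a minimal sketch (our illustration, not material from the talk) of the standard binned Expected Calibration Error on a synthetic forecast sequence; the bin count, the synthetic outcome distribution, and the overconfident distortion are illustrative assumptions.

```python
import numpy as np

def binned_ece(preds, outcomes, n_bins=10):
    """Binned Expected Calibration Error: the average gap between the mean
    predicted probability and the empirical outcome frequency in each bin,
    weighted by the fraction of forecasts falling in that bin."""
    preds = np.asarray(preds, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # np.digitize sends a prediction of exactly 1.0 past the last bin; clip it back.
    bins = np.clip(np.digitize(preds, edges) - 1, 0, n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            ece += mask.mean() * abs(preds[mask].mean() - outcomes[mask].mean())
    return ece

# Toy example: per-round true probabilities q, truthful vs. overconfident forecasts.
rng = np.random.default_rng(0)
T = 10_000
q = rng.uniform(0.2, 0.8, size=T)                            # assumed synthetic probabilities
y = rng.binomial(1, q)                                       # realized binary outcomes
p_truthful = q                                               # reporting q is (approximately) calibrated
p_overconfident = np.clip(1.5 * (q - 0.5) + 0.5, 0.0, 1.0)   # pushed toward the extremes
print("binned ECE, truthful:      ", binned_ece(p_truthful, y))
print("binned ECE, overconfident: ", binned_ece(p_overconfident, y))
```

On this toy sequence the truthful forecaster's binned ECE is close to zero while the overconfident one's is bounded away from zero; the talk examines how different formal measures trade off this kind of detectability against guarantees for downstream decision makers.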
Mingda Qiao (UMass Amherst)
Title: Truthful and Decision-Theoretic Calibration Measures
Abstract:
Calibration measures are error metrics that quantify how much probabilistic forecasts deviate from perfect calibration. Two important desiderata for a calibration measure are its truthfulness (i.e., a forecaster approximately minimizes the error by always reporting the true probabilities) and its decision-theoretic implications (i.e., the measure upper bounds the regret of downstream decision-makers who best-respond to the forecasts).
We present a taxonomy of existing calibration measures. Perhaps surprisingly, we find that all of them are far from being truthful. That is, under existing calibration measures, there are simple distributions on which a polylogarithmic (or even zero) error is achievable, while truthful prediction leads to a polynomial error. We introduce a calibration measure that is truthful up to a constant multiplicative factor, as well as a different, decision-theoretic measure that is approximately truthful under smoothed analysis.
This talk is based on joint work with Letian Zheng (COLT'24), with Nika Haghtalab, Kunhe Yang and Eric Zhao (NeurIPS'24), and with Eric Zhao (COLT'25).
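As a quick numerical illustration of the truthfulness issue (our toy experiment, not the paper's construction): on i.i.d. fair coin flips, the truthful forecaster who reports 0.5 every round accumulates ECE on the order of √T, while the abstract notes that non-truthful strategies can achieve polylogarithmic error on such simple distributions. The sketch below only reproduces the truthful side of that gap; the cumulative (unnormalized) form of ECE used here is an assumption of the illustration.

```python
import numpy as np

def cumulative_ece(preds, outcomes):
    """Cumulative (unnormalized) ECE: for each distinct predicted value v, take the
    absolute sum of (outcome - v) over rounds with prediction v, then sum over v."""
    preds = np.asarray(preds, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    return sum(abs((outcomes[preds == v] - v).sum()) for v in np.unique(preds))

rng = np.random.default_rng(1)
for T in [1_000, 10_000, 100_000]:
    y = rng.integers(0, 2, size=T)   # i.i.d. fair coin flips
    p = np.full(T, 0.5)              # the truthful forecast is 0.5 every round
    print(f"T={T:>7}  truthful cumulative ECE = {cumulative_ece(p, y):8.1f}"
          f"   (compare sqrt(T) = {np.sqrt(T):7.1f})")
```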
Yifan Wu (Microsoft Research New England)
Title: Calibration Error for Decision Making
Abstract:
Calibration allows predictions to be reliably interpreted as probabilities by decision makers. We propose a decision-theoretic calibration error, the Calibration Decision Loss (CDL), defined as the maximum improvement in decision payoff obtained by calibrating the predictions, where the maximum is over all payoff-bounded decision tasks. Vanishing CDL guarantees that the payoff loss from miscalibration vanishes simultaneously for all downstream decision tasks. We show separations between CDL and existing calibration error metrics, including the most well-studied metric, Expected Calibration Error (ECE). Our main technical contribution is a new efficient algorithm for online calibration that achieves near-optimal O(log T / √T) expected CDL, bypassing the Ω(T^{−0.472}) lower bound for ECE by Qiao and Valiant (2021).
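To unpack the payoff-based definition of CDL, here is a minimal sketch (our toy construction, not the paper's algorithm) that restricts the maximum to a one-parameter family of threshold decision tasks: an agent pays a cost c to act, earns 1 when the outcome is 1, and best-responds by acting whenever its forecast is at least c. The synthetic miscalibrated forecasts and the grid of costs are assumptions, and the restricted maximum only lower-bounds the CDL, which ranges over all payoff-bounded decision tasks.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic miscalibrated forecasts: the true per-round probability is q,
# but the reported forecast is pushed toward the extremes.
T = 50_000
q = rng.uniform(0.1, 0.9, size=T)
y = rng.binomial(1, q)
p = np.round(np.clip(2.0 * (q - 0.5) + 0.5, 0.0, 1.0), 2)   # finitely many forecast values

# "Calibrate" the predictions: replace each forecast value with the empirical
# outcome frequency among the rounds that received that forecast.
calibrated = np.empty_like(p)
for v in np.unique(p):
    mask = p == v
    calibrated[mask] = y[mask].mean()

def avg_payoff(forecast, outcomes, c):
    """Average payoff of an agent who pays c to act, earns 1 if the outcome is 1,
    and best-responds to the forecast by acting iff forecast >= c."""
    act = forecast >= c
    return float(np.mean(act * (outcomes - c)))

# Maximum payoff improvement from calibrating, over a grid of threshold tasks.
# This restricted maximum only lower-bounds the CDL, which is defined over all
# payoff-bounded decision tasks.
costs = np.linspace(0.05, 0.95, 19)
gaps = [avg_payoff(calibrated, y, c) - avg_payoff(p, y, c) for c in costs]
print("max payoff improvement from calibrating (threshold tasks):", max(gaps))
```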
Yiding Feng (HKUST)
Wei Tang (CUHK)