Volodymyr Kuleshov


Calibrated Uncertainty in Deep Learning

Abstract

Methods for reasoning under uncertainty are a key building block of accurate and reliable machine learning systems. However, because of model misspecification and the use of approximate inference, the uncertainty estimates produced by modern machine learning algorithms can be inaccurate: a 90% confidence interval, for example, may not contain the true outcome 90% of the time. In this talk, I will propose a simple way to improve the accuracy of uncertainty estimates using a procedure called recalibration. Recalibration is guaranteed to improve uncertainties from Bayesian or probabilistic models even when the underlying data is not i.i.d. It extends widely used algorithms such as Platt scaling for SVMs and applies to classification, regression, and structured prediction. Empirically, the procedure improves the performance of feedforward and recurrent neural networks, and it is helpful when probabilistic forecasts are used to guide decisions, such as in model-based reinforcement learning.
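
For readers curious about the mechanics, below is a minimal sketch of the regression variant of recalibration, assuming a Gaussian forecaster and using isotonic regression as the auxiliary recalibration model. The toy data, variable names, and the choice of scikit-learn are illustrative assumptions, not the talk's exact setup; in practice the recalibrator would be fit on a held-out calibration set rather than the same data it is evaluated on.

```python
import numpy as np
from scipy.stats import norm
from sklearn.isotonic import IsotonicRegression

# Toy setup (hypothetical): a Gaussian regression model that is
# overconfident -- its predicted std is half the true noise scale.
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=2000)
y = np.sin(x) + rng.normal(scale=1.0, size=x.shape)
mu, sigma = np.sin(x), 0.5 * np.ones_like(x)  # miscalibrated forecasts

# Step 1: the predicted quantile of each observed outcome under the
# model's CDF; for a calibrated model these are uniform on [0, 1].
p = norm.cdf(y, loc=mu, scale=sigma)

# Step 2: the empirical frequency of each predicted quantile level,
# i.e. what fraction of outcomes actually fell at or below level p.
p_sorted = np.sort(p)
p_hat = np.searchsorted(p_sorted, p, side="right") / len(p)

# Step 3: fit the recalibrator R so that R(F(y)) is close to uniform;
# the recalibrated forecast is the composition R o F.
recalibrator = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
recalibrator.fit(p, p_hat)

# Empirical coverage of a nominal 90% central interval, before and after.
lo, hi = 0.05, 0.95
q = recalibrator.predict(p)
coverage_before = np.mean((p >= lo) & (p <= hi))
coverage_after = np.mean((q >= lo) & (q <= hi))
print(f"coverage of nominal 90% interval: {coverage_before:.2f} -> {coverage_after:.2f}")
```

Because the forecaster understates its noise, the nominal 90% interval covers far fewer than 90% of outcomes before recalibration; composing the forecast CDF with the fitted isotonic map restores coverage close to the nominal level.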

Bio

Volodymyr is a postdoc with Stefano Ermon. He recently defended his PhD at Stanford, where he worked with Serafim Batzoglou, Michael Snyder, Christopher Ré, and Percy Liang. His research focuses on machine learning and its applications in genomics and personalized medicine.