Tutorial on Uncertainty Estimation for Natural Language Processing

at COLING 2022

Overview

Accurate estimates of uncertainty are important for many difficult or sensitive prediction tasks in natural language processing (NLP). Though large-scale pre-trained models have vastly improved the accuracy of applied machine learning models throughout the field, there are still many instances in which they fail. The ability to precisely quantify uncertainty while handling the challenging scenarios that modern models can face when deployed in the real world is critical for reliable, consequential decision-making. This tutorial is intended for academic researchers and industry practitioners alike, and provides a comprehensive introduction to uncertainty estimation for NLP problems---from fundamentals in probability calibration, Bayesian inference, and confidence set (or interval) construction, to applied topics in modern out-of-distribution detection and selective inference.
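As a taste of the calibration fundamentals covered in the tutorial, the sketch below fits a single temperature parameter to soften an overconfident classifier's logits (temperature scaling). All function names and the synthetic data are illustrative assumptions, not the tutorial's own code.

```python
# Minimal temperature-scaling sketch: pick the temperature T that
# minimizes negative log-likelihood (NLL) on held-out data.
import numpy as np

def softmax(logits, T=1.0):
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def nll(logits, labels, T):
    probs = softmax(logits, T)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels, grid=np.linspace(0.25, 10.0, 196)):
    """Grid-search the temperature that minimizes validation NLL."""
    return float(min(grid, key=lambda T: nll(logits, labels, T)))

# Synthetic "overconfident" model: huge logit margins, yet wrong on
# roughly a third of the examples.
rng = np.random.default_rng(0)
n, k = 300, 3
labels = rng.integers(0, k, size=n)
pred = labels.copy()
wrong = rng.random(n) < 1 / 3
pred[wrong] = (pred[wrong] + 1) % k          # model's (sometimes wrong) guess
logits = rng.normal(0.0, 1.0, size=(n, k))
logits[np.arange(n), pred] += 10.0           # extreme confidence in the guess

T = fit_temperature(logits, labels)          # T > 1 softens the probabilities
```

Because the synthetic model is overconfident, the fitted temperature comes out above 1, and dividing the logits by it lowers the held-out NLL relative to the uncalibrated model.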

Speakers

Google Research

Twitter

Outline

(1) Introduction

(2) Probability Calibration

(3) Bayesian Approaches

(4) Conformal Prediction

(5) Selective Prediction & OOD Detection

(6) Conclusion
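To make the outline concrete, here is a minimal sketch of split conformal prediction (topic 4): calibrate a score threshold on held-out data, then emit prediction *sets* with a finite-sample coverage guarantee. The data and function names are illustrative assumptions, not material from the tutorial itself.

```python
# Split conformal prediction for classification, using the common
# nonconformity score 1 - p(true class).
import numpy as np

def conformal_quantile(cal_scores, alpha=0.1):
    """Finite-sample-corrected (1 - alpha) quantile of calibration scores."""
    n = len(cal_scores)
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(cal_scores, min(q, 1.0), method="higher")

def prediction_sets(probs, qhat):
    """Include every class y whose score 1 - p(y) is below the threshold."""
    return [np.where(1.0 - p <= qhat)[0] for p in probs]

rng = np.random.default_rng(1)
n, k = 500, 4

# Simulated, well-specified softmax outputs; labels drawn from them.
cal_probs = rng.dirichlet(np.ones(k) * 0.5, size=n)
cal_labels = np.array([rng.choice(k, p=p) for p in cal_probs])
cal_scores = 1.0 - cal_probs[np.arange(n), cal_labels]
qhat = conformal_quantile(cal_scores, alpha=0.1)

# Fresh test split: the sets should cover the true label ~90% of the time.
test_probs = rng.dirichlet(np.ones(k) * 0.5, size=n)
test_labels = np.array([rng.choice(k, p=p) for p in test_probs])
test_sets = prediction_sets(test_probs, qhat)
coverage = np.mean([y in s for y, s in zip(test_labels, test_sets)])
```

The guarantee is marginal (on average over draws of calibration and test data), so the empirical coverage hovers around 1 - alpha rather than hitting it exactly; larger sets signal higher uncertainty.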

Slides

Tutorial: Uncertainty Estimation for Natural Language Processing (final)

Citation

@inproceedings{51177,
  title     = {Uncertainty Estimation for Natural Language Processing},
  author    = {Adam Fisch and Robin Jia and Tal Schuster},
  year      = {2022},
  booktitle = {COLING}
}