The notion of uncertainty is of major importance in machine learning and constitutes a key element of modern machine learning methodology. In recent years, its relevance has grown further as machine learning is increasingly deployed in practical applications, many of which come with safety requirements. In this regard, machine learning scholars have identified new problems and challenges that call for new methodological developments. Indeed, while uncertainty has long been perceived as almost synonymous with standard probability and probabilistic prediction, recent research has moved beyond traditional approaches and also leverages more general formalisms and uncertainty calculi. For example, distinguishing between different sources and types of uncertainty, such as aleatoric and epistemic uncertainty, turns out to be useful in many machine learning applications. The workshop will pay specific attention to recent developments of this kind.
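As an illustration of the aleatoric/epistemic distinction mentioned above, a common approach (one option among several; the function names here are illustrative) decomposes the total predictive uncertainty of an ensemble into an aleatoric part (average entropy of the individual members) and an epistemic part (the remainder, i.e., the mutual information between prediction and model). A minimal sketch:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in nats) of a probability vector."""
    p = np.clip(p, 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

def decompose_uncertainty(member_probs):
    """Entropy-based uncertainty decomposition for an ensemble.

    member_probs: array of shape (n_members, n_classes), one predictive
    distribution per ensemble member.
    Returns (total, aleatoric, epistemic), where
      total     = entropy of the averaged prediction,
      aleatoric = average entropy of the individual predictions,
      epistemic = total - aleatoric (mutual information, >= 0).
    """
    member_probs = np.asarray(member_probs, dtype=float)
    total = entropy(member_probs.mean(axis=0))
    aleatoric = float(np.mean([entropy(p) for p in member_probs]))
    return total, aleatoric, total - aleatoric

# Members agree: all uncertainty is aleatoric, epistemic part is ~0.
t1, a1, e1 = decompose_uncertainty([[0.7, 0.3], [0.7, 0.3]])

# Members disagree: a large epistemic component appears.
t2, a2, e2 = decompose_uncertainty([[0.9, 0.1], [0.1, 0.9]])
```

With agreeing members the epistemic term vanishes, whereas strong disagreement between members yields a large epistemic component even though each individual prediction is confident.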
The goal of this small-scale workshop is to bring together researchers interested in the topic of uncertainty in machine learning. It is meant to provide a place for the discussion of the most recent developments in the modeling, processing, and quantification of uncertainty in machine learning problems, and the exploration of new research directions in this field.
The scope of the workshop covers, but is not limited to, the following topics:
adversarial examples
Bayesian machine learning
belief functions
calibration
classification with reject option
conformal prediction
credal classifiers
deep learning and neural networks
ensemble methods
epistemic uncertainty
hypothesis testing
imprecise probability
likelihood and fiducial inference
model selection and misspecification
multi-armed bandits
noisy data and outliers
online learning
out-of-sample prediction
performance evaluation
probabilistic methods
reliable prediction
set-valued prediction
uncertainty quantification