Program

This is a tentative program of the workshop. It will evolve as we gather more information.

The tutorial taking place in the morning provides an overview of traditional approaches to handling uncertainty in machine learning as well as more recent developments. It will specifically focus on the distinction between aleatoric and epistemic uncertainty in the common setting of supervised learning, and will be largely based on the recent survey paper available at https://arxiv.org/abs/1910.09457.
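For readers unfamiliar with this distinction, the sketch below (not part of the tutorial materials) illustrates one common way to separate the two sources of uncertainty using an ensemble of probabilistic classifiers: the entropy of the averaged prediction is taken as total uncertainty, the average entropy of the individual members as aleatoric uncertainty, and their difference (a mutual information) as epistemic uncertainty. The function names and toy data are illustrative assumptions.

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy of a categorical distribution (in nats)."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def uncertainty_decomposition(member_probs):
    """Split the total predictive uncertainty of an ensemble into
    aleatoric and epistemic parts.

    member_probs: array of shape (n_members, n_samples, n_classes)
                  holding each member's predicted class probabilities.
    """
    mean_probs = member_probs.mean(axis=0)           # averaged prediction
    total = entropy(mean_probs)                       # entropy of the mean
    aleatoric = entropy(member_probs).mean(axis=0)    # mean of member entropies
    epistemic = total - aleatoric                     # mutual information
    return total, aleatoric, epistemic

# Toy example: a hypothetical 3-member ensemble, 2 samples, 3 classes.
probs = np.array([
    [[0.7, 0.2, 0.1], [0.4, 0.3, 0.3]],
    [[0.6, 0.3, 0.1], [0.1, 0.8, 0.1]],
    [[0.8, 0.1, 0.1], [0.3, 0.3, 0.4]],
])
print(uncertainty_decomposition(probs))
```

In this decomposition, disagreement between ensemble members (rather than the spread of any single member's prediction) is what gets attributed to epistemic uncertainty, i.e. to a lack of knowledge that more data or a better model could in principle reduce.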

The slides of the tutorial are available here!

Recordings of the first part are available here and of the second part here!

A (pre-recorded) video of the invited talk is available here!

9 am - 10 am: tutorial on uncertainty handling in machine learning - part 1

Part 1 of the tutorial - details to be announced

10 am - 10.30 am: break

Coffee break (bring your own coffee)

10.30 am - 11.30 am: tutorial on uncertainty handling in machine learning - part 2

Part 2 of the tutorial - details to be announced

11.30 am - 12 pm: break

Coffee break (bring your own coffee)

12 pm - 1 pm: Invited talk, Meelis Kull, "Do we need to estimate the calibration loss of probabilistic classifiers? If yes, then how?"

Ensuring that classifiers report well-calibrated class probabilities is a task attracting increasing research effort, mostly due to the over-confidence of deep neural networks, which harms cost-sensitive decision making and other downstream applications. By definition, being well-calibrated means having a near-zero calibration loss, as determined by the decomposition of any proper loss such as cross-entropy or the Brier score. Common measures of classifier calibration are the ECE (expected calibration error) and its many variants. The talk will first provide a brief, self-contained introduction to these concepts, with new intuitive visualizations of the loss decompositions. It will then develop an argument about whether we should worry less about obtaining well-calibrated models and pay more attention to proper losses tailored to the particular application domain, taking into account the uncertainty in misclassification costs. For cases where the direct estimation of the calibration loss is unavoidable, the limitations of ECE-like measures will be discussed.
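As background for the talk, the following is a minimal sketch of the standard equal-width-binned ECE mentioned in the abstract. It is an illustrative implementation under assumed inputs (the per-example confidence of the predicted class and a correctness indicator), not the speaker's code.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Standard equal-width binned ECE.

    confidences: predicted probability of the predicted class, shape (n,)
    correct:     1 if the prediction was correct, 0 otherwise, shape (n,)
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not np.any(in_bin):
            continue
        avg_conf = confidences[in_bin].mean()   # mean confidence in the bin
        accuracy = correct[in_bin].mean()       # empirical accuracy in the bin
        ece += (in_bin.sum() / n) * abs(avg_conf - accuracy)
    return ece

# Toy example: an over-confident classifier.
conf = np.array([0.95, 0.9, 0.99, 0.8, 0.85])
hit  = np.array([1, 0, 1, 0, 1])
print(expected_calibration_error(conf, hit, n_bins=10))
```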

2 pm - 4 pm: paper presentation - session 1

4.20 pm - 5 pm: paper presentation - session 2

5.10 pm - 6.50 pm: paper presentation - session 3

List of accepted papers:

  • On Calibrated Model Uncertainty in Deep Learning by Biraja Ghoshal and Allan Tucker (Brunel University, London UK). Video here.

  • Undecided Voters as Set-Valued Information - Machine Learning Approaches under Complex Uncertainty by Dominik Kreiss, Malte Nalenz and Thomas Augustin (LMU Munich). Video here.

  • Cost-sensitive classification with uncertain costs by Viacheslav Komisarenko and Meelis Kull (University of Tartu). Video here.

  • Time-Dynamic Estimates of the Reliability of Deep Semantic Segmentation Networks by Kira Maag, Matthias Rottmann and Hanno Gottschalk (University of Wuppertal). Video here.

  • Adjusting Decision Trees for Uncertain Class Proportions by Cyprien Gilet, Marie Guyomard, Susana Barbosa and Lionel Fillatre (University Côte d’Azur, CNRS). Video here.

  • A first glance at multi-label chaining using imprecise probabilities by Yonatan Carlos Carranza Alarcon and Sébastien Destercke (Université de Technologie de Compiègne, CNRS).

  • Ensemble-based Uncertainty Quantification: Bayesian versus Credal Inference by Mohammad Hossein Shaker and Eyke Hüllermeier (Paderborn University).

  • Indeterminacy in classification: does the method matter? by Juliette Ortholand, Sébastien Destercke and Khaled Belahcene (Université de Technologie de Compiègne, CNRS). Video here.

  • Towards operational application of Deep Reinforcement Learning to Earth Observation satellite scheduling by Adrien Hadj-Salah (IRT Saint-Exupéry), Jonathan Guerra (IRT Saint-Exupéry), Mathieu Picard (Airbus) and Mikaël Capelle (IRT Saint-Exupéry). Video here.

  • Towards Robust Classification with Generative Forests by Alvaro Henrique Chaim Correia, Robert Peharz and Cassio de Campos (Eindhoven University of Technology). Video here.

  • Investigating maximum likelihood based training of infinite mixtures for uncertainty quantification by Sina Däubener and Asja Fischer (Ruhr University Bochum). Video here.

  • Using Subjective Logic to Estimate Uncertainty in Multi-Armed Bandit Problems by Fabio Massimo Zennaro and Audun Jøsang (The University of Oslo). Video here.

  • A Bayesian Neural Network based on Dropout Regulation by Claire Theobald (Université de Lorraine, CNRS, Inria, LORIA), Frédéric Pennerath (CentraleSupélec, CNRS, LORIA), Brieuc Conan-Guez (Université de Lorraine, CNRS, Inria, LORIA), Miguel Couceiro (Université de Lorraine, CNRS, Inria, LORIA) and Amedeo Napoli (Université de Lorraine, CNRS, Inria, LORIA). Video here.

  • Towards a robust and consistent estimation of a vehicle's mass by Mathieu Randon (Université de Technologie de Compiègne, CNRS), Benjamin Quost (Université de Technologie de Compiègne, CNRS), Nassim Boudaoud (Université de Technologie de Compiègne) and Dirk von Wissel (Renault SAS). Video here.

  • Classification of Uncertain Time Series by Propagating Uncertainty in Shapelet Transform by Michael Franklin Mbouopda and Engelbert Mephu Nguifo (University Clermont Auvergne - LIMOS - CNRS). Video here.