Epistemic Uncertainty in Artificial Intelligence 


A New Paradigm on Artificial Intelligence using Epistemic Uncertainty - E-pi UAI 2023 Workshop 

Friday, August 4, Pittsburgh, USA


Important Dates

For more information on paper submission, registration, and the workshop schedule, please refer to the Calls, Registration, and Schedule pages, respectively.

Call for Papers

The E-pi UAI 2023 workshop aims to raise awareness around the modelling of epistemic uncertainty in AI/ML, a rapidly emerging topic in the AI community represented at UAI. As we aim for the broadest possible involvement, we invite the submission of peer-reviewed papers covering the following topics:


The list is not exhaustive as we wish to mobilize the whole community around this theme.

You are also free to submit your papers to other venues of your choice, and preliminary work is welcome. Our primary aim is to showcase advances in the field of uncertainty in machine learning, and we eagerly seek to highlight novel contributions and cutting-edge research in this area.

We will award prizes for the best paper and the best student paper, and authors submitting original work will be invited to consider publication in a proceedings volume, subject to attracting a sufficient number of high-quality papers. Submissions are non-archival; we will seek the authors' consent before including any paper in the proceedings.


Abstract

Machine learning models have demonstrated strong performance in safety-critical applications such as autonomous driving and healthcare. In the presence of uncertainty, however, model predictions may be suboptimal or even incorrect. This highlights the need for learning systems to be equipped with the means to assess a model's confidence in its predictions. Hence, calibrated uncertainty estimation and modelling techniques are key factors in the predictive process, where the challenge is 'to know when a model does not know'. Modelling epistemic uncertainty, the kind of uncertainty that arises from a lack of data (information or knowledge) in terms of both quantity and quality, is a critical step for any machine learning model. Epistemic AI's principle is that AI should first and foremost learn from the data it cannot see; this translates into learning sets of hypotheses compatible with the available data. With the E-pi UAI 2023 workshop, we seek to attract the machine learning research community to contribute novel solutions for modelling epistemic uncertainty in AI, including both probabilistic and non-probabilistic techniques.
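As a concrete illustration of 'knowing when a model does not know', one widely used proxy for epistemic uncertainty (not specific to this workshop or the Epistemic AI project) is the disagreement among an ensemble of models trained on resampled data: where the data constrain the hypotheses, the members agree; far from the training support, they diverge. The sketch below, on a toy 1-D regression task, is a minimal assumption-laden example, not a prescribed method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: targets are observed only for x in [0, 1].
x_train = rng.uniform(0.0, 1.0, size=40)
y_train = np.sin(2 * np.pi * x_train) + 0.1 * rng.normal(size=40)

# Ensemble-style proxy: fit the same model class (here a cubic
# polynomial) on bootstrap resamples of the training set.
n_members = 20
coefs = []
for _ in range(n_members):
    idx = rng.integers(0, len(x_train), size=len(x_train))
    coefs.append(np.polyfit(x_train[idx], y_train[idx], 3))

def epistemic_std(x):
    # Standard deviation of member predictions at x: this disagreement
    # is high exactly where the available data say little.
    preds = np.array([np.polyval(c, x) for c in coefs])
    return preds.std(axis=0)

in_dist = epistemic_std(np.array([0.5]))[0]   # inside training support
out_dist = epistemic_std(np.array([2.0]))[0]  # far outside it
```

Here `out_dist` is far larger than `in_dist`: the ensemble members, all compatible with the observed data, extrapolate very differently at x = 2.0, signalling that the model 'does not know' there. Richer epistemic representations (e.g. sets of hypotheses, credal or non-probabilistic models) aim to capture this effect in a principled way.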


Machine learning has recently been the object of much attention in many domains and communities, giving rise to exciting ongoing efforts such as continual learning (making the learning process a life-long endeavour), multi-task learning (aiming to distil knowledge from multiple tasks to solve a different problem) and meta-learning (learning to learn). Uncertainty, however, has mainly been approached from the point of view of preventing low accuracy and non-robust outcomes by choosing the best-fitting uncertainty model. As these efforts are all still firmly rooted in AI's conventional principles, they fail to recognise the foundational issue the discipline has with the representation of uncertain knowledge; the unspoken assumption is that we are quite satisfied with the models (and accuracy) we have, and merely wish to extend their capabilities to new settings and data. Epistemic AI (E-pi) breaks entirely with this state of affairs. It goes beyond 'human-centric' AI, the push to make artificial constructs more trustable by human beings and more capable of understanding humans, since it strives to model the uncertainty stemming not just from human behaviour, but from all sources of uncertainty present in complex environments.

What is Epistemic AI?

Epistemic AI aims to create a new paradigm for next-generation artificial intelligence that provides worst-case guarantees on its predictions, thanks to the proper modelling of real-world uncertainties. This involves formulating a new mathematical framework for optimisation under epistemic uncertainty, superseding existing probabilistic approaches, which will lay the foundations for new 'epistemic' learning paradigms able to revisit the foundations of artificial intelligence. Epistemic AI focuses, in particular, on some of the most important areas of machine learning: unsupervised learning, supervised learning and reinforcement learning. The newly designed learning paradigms are applied and tested in proofs of concept in perception and decision making for autonomous driving, in scenarios ranging from autonomous racing cars to road-user behaviour understanding.

Although artificial intelligence (AI) has improved remarkably over the last few years, its inability to deal with fundamental uncertainty severely limits its application. The Epistemic-AI proposal re-imagines AI with a proper treatment of the uncertainty stemming from our forcibly partial knowledge of the world. As currently practised, AI cannot confidently make predictions robust enough to stand the test of data generated by processes different (even by tiny details, as shown by ‘adversarial’ results able to fool deep neural networks) from those studied at training time. While recognising this issue under different names (e.g. ‘overfitting’), traditional machine learning seems unable to address it in non-incremental ways. As a result, AI systems suffer from brittle behaviour and find it difficult to operate in new situations, e.g. adapting to driving in heavy rain or to other road users’ different styles of driving, e.g. deriving from cultural traits. Epistemic AI’s overall objective is to create a new paradigm for next-generation artificial intelligence providing worst-case guarantees on its predictions thanks to proper modelling of real-world uncertainties. For more information about the project, please visit https://www.epistemic-ai.eu/.

Invited Speakers

Gert De Cooman

Professor, Ghent University, Belgium

Marco Zaffalon

Scientific Director, IDSIA, Switzerland

Yarin Gal

Associate Professor, Oxford University, UK

Aaditya Ramdas

Assistant Professor, Carnegie Mellon University, USA

Organizing Committee

Fabio Cuzzolin

Professor, Oxford Brookes University, UK

Associate Professor, TU Delft, Netherlands

Senior Research Fellow, KU Leuven, Belgium

Maryam Sultana

Research Fellow, Oxford Brookes University, UK

PhD Student, KU Leuven, Belgium

Shireen Kudukkil Manchingal

PhD Student, Oxford Brookes University, UK