Epistemic Uncertainty in Artificial Intelligence
A New Paradigm on Artificial Intelligence using Epistemic Uncertainty - E-pi UAI 2023 Workshop
Friday, August 4, Pittsburgh, USA
News Updates
--> 25/04/2024 Springer Proceedings Published @LNAI https://link.springer.com/book/10.1007/978-3-031-57963-9
--> 15/12/2023 Springer Proceedings In process @LNAI www.springer.com/gp/computer-science/lncs/forthcoming-proceedings
--> 24/07/2023 The Deadline to Submit Video Presentations is July 27th, 2023
--> 21/07/2023 Presentation Instructions for Authors Presenting Online [link]
--> 20/07/2023 The Detailed Program Has Been Announced [link]
--> 15/07/2023 List of Accepted Papers [link]
--> 13/07/2023 Oral and Poster Paper Presentations instructions [link]
--> 13/07/2023 Workshop Schedule will be announced on 19/07/2023
--> 11/07/2023 Oral and Poster Paper Announcements will be notified to authors after camera-ready submission
--> 06/07/2023 Workshop Venue: University Center @Carnegie Mellon University, Pittsburgh, PA, USA
--> 09/06/2023 Papers Submission Deadline Extended
--> 21/04/2023 Papers Submission Link is active
--> 02/03/2023 Workshop Homepage is open
Important Dates
Source Files Submission for Springer Proceedings: January 15, 2024
Video Presentation Submission: July 27, 2023
Paper Camera Ready Submission: July 14, 2023 [Camera Ready Instructions]
Paper Acceptance Notification: July 5, 2023 (originally July 4, 2023) [Announced @CMT]
Paper Submission Deadline: June 25, 2023, 23:59 anywhere on earth (AoE) (extended from June 10, 2023) [Paper Submission Link]
E-pi UAI 2023 Workshop date: August 4 (Friday), 2023
For more information regarding paper submission, registration, and workshop schedule, please refer to Calls, Registration, and Schedule, respectively.
Call for Papers
The E-pi UAI 2023 workshop aims to raise awareness of the modelling of epistemic uncertainty in AI/ML, a rapidly emerging topic in the AI community represented at UAI. As we aim for the broadest possible involvement, we invite the submission of papers, which will be peer-reviewed, covering the following topics:
Model uncertainty estimation,
Robustness to distribution shift,
Out-of-distribution generalisation,
Model adaptation,
Datasets and protocols for evaluating uncertainty and robustness,
Conformal prediction,
Distribution-free uncertainty quantification,
Optimisation under uncertainty,
Uncertainty estimation using deep ensembles,
Bayesian deep learning (approximate inference, Bayesian deep RL),
Deep recognition models for variational inference,
Epistemic learning,
Uncertainty in real-world applications (e.g. autonomous driving, healthcare, language models).
The list is not exhaustive, as we wish to mobilise the whole community around this theme.
You remain free to submit your papers, including preliminary work, to other venues of your choice. Our primary aim is to showcase advancements in the field of uncertainty in machine learning, and we eagerly seek to highlight novel contributions and cutting-edge research in this area.
We will award prizes for the best paper and the best student paper, and authors submitting original work will be invited to consider publication in a proceedings volume, subject to attracting a sufficient number of high-quality papers. Submissions are non-archival; we will seek the authors' consent before including their work in the proceedings.
Abstract
Machine learning models have demonstrated strong performance in safety-critical applications such as autonomous driving and healthcare. However, in the presence of uncertainty, model predictions might be suboptimal or even incorrect. This highlights the need for learning systems to be equipped with the means to determine a model's confidence in its predictions. Hence, calibrated uncertainty estimation and modelling techniques are key factors in the predictive process, where the challenge is 'to know when a model does not know'. Modelling epistemic uncertainty, the kind of uncertainty that arises from a lack of data (information or knowledge) in terms of both quantity and quality, is a critical step for any machine learning model. Epistemic AI's principle is that AI should first and foremost learn from the data it cannot see; this translates into learning sets of hypotheses compatible with the available data. With the E-pi UAI 2023 workshop, we seek to attract the machine learning research community to contribute novel solutions towards modelling epistemic uncertainty in AI, including both probabilistic and non-probabilistic techniques.
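As a purely illustrative sketch of 'learning sets of hypotheses compatible with the available data', the snippet below trains a small bootstrapped ensemble on a toy scikit-learn dataset and scores epistemic uncertainty as the disagreement (mutual information) between ensemble members. The dataset, architecture, ensemble size and scoring rule are our own illustrative choices, not a method prescribed by the workshop or the Epistemic AI project.

import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)

# Each ensemble member is trained on a different bootstrap resample,
# giving a crude set of hypotheses that all fit the observed data.
ensemble = []
for seed in range(5):
    idx = rng.integers(0, len(X), size=len(X))
    member = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=seed)
    member.fit(X[idx], y[idx])
    ensemble.append(member)

def epistemic_score(x):
    # Mutual information between the predicted label and the choice of member:
    # predictive entropy minus the mean per-member entropy (the epistemic part).
    probs = np.stack([m.predict_proba(x) for m in ensemble])   # (members, n, classes)
    mean = probs.mean(axis=0)
    predictive = -(mean * np.log(mean + 1e-12)).sum(axis=-1)
    expected = -(probs * np.log(probs + 1e-12)).sum(axis=-1).mean(axis=0)
    return predictive - expected

# In-distribution inputs should score low; inputs far from the training data
# typically score higher, flagging what the ensemble "does not know".
print(epistemic_score(X[:3]))
print(epistemic_score(np.array([[4.0, 4.0], [-4.0, -4.0]])))

Inputs far from the training data typically yield larger scores, which is one concrete way for a model 'to know when it does not know'.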
Although machine learning has recently been the object of much attention across many domains and communities, Epistemic AI breaks entirely with the current state of artificial intelligence and with its most exciting ongoing efforts, such as continual learning (making the learning process a life-long endeavour), multi-task learning (aiming to distil knowledge from multiple tasks to solve a different problem) and meta-learning (learning to learn). In these efforts, uncertainty has mainly been approached from the point of view of preventing low model accuracy and non-robust outcomes by choosing the best uncertainty model. As they are all still firmly rooted in AI's conventional principles, they fail to recognise the foundational issue the discipline has with the representation of uncertain knowledge: once a model has been trained on the available data, its predictions can change simply because new data arrive, and the unspoken assumption is that we are quite satisfied with the model (and its accuracy) we already have and merely wish to extend its capabilities to new settings and data. Epistemic AI (E-pi) also goes beyond 'human-centric' AI, the push to make artificial constructs more trustable by human beings and more capable of understanding humans, since it strives to model the uncertainty stemming not just from human behaviour but from all sources of uncertainty present in complex environments.
What is Epistemic AI?
Epistemic AI aims to create a new paradigm for next-generation artificial intelligence that provides worst-case guarantees on its predictions, thanks to a proper modelling of real-world uncertainties. This involves formulating a new mathematical framework for optimisation under epistemic uncertainty that supersedes existing probabilistic approaches, which in turn will lay the groundwork for new 'epistemic' learning paradigms able to revisit the foundations of artificial intelligence. Epistemic AI focuses, in particular, on some of the most important areas of machine learning: unsupervised learning, supervised learning and reinforcement learning. The newly designed learning paradigms are applied and tested as proofs of concept in perception and decision making for autonomous driving, in scenarios ranging from autonomous racing cars to road-user behaviour understanding.
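As one purely illustrative formalisation of such worst-case guarantees (not the project's actual framework), optimisation under epistemic uncertainty is often written as a minimax problem over a set \(\mathcal{P}\) of distributions compatible with the available evidence (a credal set), rather than over a single distribution:

\[
\theta^\star = \arg\min_{\theta} \, \max_{P \in \mathcal{P}} \, \mathbb{E}_{(x,y)\sim P}\big[\ell\big(f_\theta(x), y\big)\big],
\]

where \(\ell\) is a loss function and \(f_\theta\) the predictor; when \(\mathcal{P}\) contains a single distribution this reduces to standard expected-risk minimisation.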
Although artificial intelligence (AI) has improved remarkably over the last few years, its inability to deal with fundamental uncertainty severely limits its application. The Epistemic AI proposal re-imagines AI with a proper treatment of the uncertainty stemming from our forcibly partial knowledge of the world. As currently practised, AI cannot confidently make predictions robust enough to stand the test of data generated by processes different (even by tiny details, as shown by 'adversarial' examples able to fool deep neural networks) from those studied at training time. While recognising this issue under different names (e.g. 'overfitting'), traditional machine learning seems unable to address it in non-incremental ways. As a result, AI systems suffer from brittle behaviour and find it difficult to operate in new situations, for example when adapting to driving in heavy rain or to other road users' different styles of driving, deriving for instance from cultural traits. Epistemic AI's overall objective is to create a new paradigm for next-generation artificial intelligence providing worst-case guarantees on its predictions, thanks to a proper modelling of real-world uncertainties. For more information about the project, please visit https://www.epistemic-ai.eu/.