Modern Artificial Intelligence (AI) systems, ranging from predictive models to generative and large language models, have demonstrated remarkable capabilities, advancing science, healthcare, and everyday life. Powered by probability theory, these models excel at capturing statistical variability in data, enabling impressive performance in tasks such as prediction and generation. However, as our reliance on these systems deepens, their fundamental limitations are becoming increasingly evident, particularly in reasoning under uncertainty that extends beyond data-driven variability, including ignorance, ambiguity, and distributional shifts. This shortfall reflects a lack of epistemic intelligence: the capacity to recognise and communicate what a system does not know, or, in other words, to quantify, express, and act reasonably upon its epistemic uncertainty (EU). Overcoming this limitation calls for a paradigm shift beyond classical probability, toward frameworks that explicitly accommodate ambiguity, imprecision, and indecision in the modelling process. Faithful modelling of EU not only enhances transparency and fosters trust in AI systems, but also plays a crucial role in risk-sensitive decision-making. By explicitly acknowledging uncertainty, models can better inform downstream decisions, avoid overconfident predictions, and mitigate potential harms, particularly in high-stakes or safety-critical applications.
This workshop will serve as an interface for experts from various machine learning communities that also centre around EU to meet and exchange ideas, and for participants to present their works-in-progress that resonate with the workshop's central themes. In particular, it aims to shed light on the following questions:
What are the fundamental limitations of classical probability theory in representing imprecision and ambiguity?
What are the appropriate mathematical foundations for representing EU?
How can we design learning algorithms that operate under EU arising from distribution shifts, imprecise modelling assumptions, or uncertainty about the learning targets themselves?
How should we benchmark and evaluate uncertainty-centric machine learning algorithms?
What constitutes a "good" uncertainty quantification method?
How can machine learning help overcome the computational challenges of traditional principled uncertainty models, which often scale exponentially with the size of the state space?
In which machine learning applications does EU play a central and indispensable role?
How can principled models of EU be effectively integrated into large-scale modern architectures, such as large language models and diffusion models?
Our panel discussion will reflect on the foundational question of how epistemic uncertainty should be represented in intelligent systems. While alternative frameworks for modelling uncertainty exist, their role in modern machine learning remains open and actively debated. The discussion will draw on theoretical, practical, and epistemological perspectives to illuminate the current landscape and future directions.
10 October 2025: Submission opens
17 October 2025: Paper submission deadline
31 October 2025: Author notification
10 November 2025: Camera-ready submission deadline
6 December 2025: Main workshop
For more information regarding submission, registration, and workshop schedule, please refer to Calls, Registration, and Schedule, respectively.
This workshop seeks contributions from researchers across machine learning, statistics, philosophy of science, decision theory, and related disciplines to explore theoretical foundations, algorithmic innovations, and practical applications that centre around epistemic uncertainty (EU). We welcome both works-in-progress and mature research addressing the central challenge of reasoning and decision-making under epistemic uncertainty.
Topics of interest include, but are not limited to:
Representing and Measuring Epistemic Uncertainty
Mathematical frameworks for EU: imprecise probability, fuzzy logic, belief functions, possibility theory, etc.
Comparisons and formal properties of uncertainty representations
Evaluation criteria and benchmarking strategies for uncertainty quantification methods
Epistemic vs aleatoric uncertainty: delineation and interaction
Prediction Under Epistemic Uncertainty
Predictive models that capture and express EU: Bayesian models, evidential deep learning, credal models
Generalisation under distribution shifts, domain adaptation, and robustness analysis
OOD detection and safe prediction under model misspecification
Learning under partial or vague supervision
Decision-Making and Learning Under Epistemic Uncertainty
Risk-sensitive and ambiguity-aware decision-making frameworks
Uncertainty quantification in generative models
Active learning, Bayesian experimental design, and uncertainty-aware optimisation
EU in reinforcement learning, continual learning, and online learning settings
Integration of principled uncertainty models into large-scale architectures (e.g., transformers, diffusion models)
Scalable algorithms for traditionally intractable uncertainty models
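To make the first topic area concrete, here is a minimal illustrative sketch (not taken from any workshop material) of one representation mentioned above, imprecise probability via a credal set: when a modeller cannot commit to a single distribution, retaining a set of candidate distributions yields lower and upper expectations, and the gap between them expresses epistemic uncertainty that a single point estimate would hide. The function name and example numbers are hypothetical.

```python
def lower_upper_expectation(credal_set, payoff):
    """Return (lower, upper) expectations of `payoff` over a credal set,
    given as a list of probability vectors on the same finite state space."""
    expectations = [
        sum(p * x for p, x in zip(dist, payoff)) for dist in credal_set
    ]
    return min(expectations), max(expectations)

# Three candidate distributions over a binary outcome; the modeller has
# too little evidence to single one out, so all three are retained.
credal_set = [[0.3, 0.7], [0.5, 0.5], [0.6, 0.4]]
payoff = [0.0, 1.0]  # indicator of the second outcome

lo, hi = lower_upper_expectation(credal_set, payoff)
# The interval [lo, hi] = [0.4, 0.7] conveys epistemic uncertainty about
# the second outcome, rather than a single (possibly overconfident) number.
```

A precise probabilistic model corresponds to the special case where the credal set contains a single distribution, so the interval collapses to a point.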
We encourage both theoretical contributions and applied case studies. Submissions that challenge prevailing assumptions, propose novel benchmarks, or provide insights into the philosophical and foundational dimensions of uncertainty in AI are especially welcome.
Michele Caprio, University of Manchester
Siu Lun Chau, Nanyang Technological University
Ruobin Gong, Rutgers University, United States
Shireen Kudukkil Manchingal, Oxford Brookes University
Krikamol Muandet, CISPA Helmholtz Center for Information Security
Bob Williamson, University of Tübingen
Rejev Verma, PhD, University of Amsterdam
Alessandro Zito, Postdoctoral Researcher, Harvard University
Claudio Battiloro, Postdoctoral Researcher, Harvard University
Christopher Bülte, PhD, LMU Munich, Germany
Rabanus Derr, PhD, University of Tübingen
Soroush H. Zargarbashi, CISPA, Germany
Sabina Sloman, Postdoctoral Researcher, University of Manchester
Paul Hofman, PhD, LMU Munich, Germany
Jérémie Houssineau, Asst. Professor, Nanyang Technological University
Nong Minh Hieu, PhD, SMU Singapore
Maryam Sultana, Research Fellow, Oxford Brookes University
Mira Jürgens, PhD, University of Ghent
Jack Liell-Cock, PhD, Oxford University
Kaizheng Wang, PhD, KU Leuven
Julian Rodeman, PhD, LMU Munich, Germany
Yusuf Sale, PhD, LMU Munich, Germany
Fanyi Wu, PhD, University of Manchester
Anurag Singh, PhD, CISPA Helmholtz Center for Information Security