EIML@EurIPS 2025 marked the beginning of a growing community around Epistemic Intelligence in Machine Learning. The conversation continues.
Why are we doing this?
Machine learning has transformed computation from executing well-defined tasks to learning from data and generalising to unseen scenarios, becoming a foundational technology that drives scientific discovery, enables new forms of creative expression, and reshapes society. Yet, as these systems are deployed in increasingly open-ended and high-stakes environments, fundamental limitations emerge: distribution shift, adversarial manipulation, lack of robustness, catastrophic forgetting, hallucinations, safety risks, and misalignment all reflect the difficulty of operating reliably under incomplete information and strategic behaviour. This has sparked growing interest in whether, and to what extent, a higher form of intelligence, often termed epistemic intelligence (EI), can be embedded into learning machines. Rather than a single capability, EI captures a constellation of behaviours associated with higher cognitive functions, including awareness of ignorance, introspection, epistemic understanding, accountability, creativity, curiosity, anticipation, knowledge discovery, and the capacity for self-improvement.
Specifically, the ability to recognise and reason about the limits of one's own knowledge (epistemic uncertainty or ignorance) has long been regarded as a hallmark of both human and machine intelligence. This capability enables models not only to make accurate predictions, but also to recognise when prediction is unwarranted. Such competence is becoming increasingly critical across several areas of machine learning, including distribution shift, AI safety, AI alignment, and continual learning, where systems must explicitly reason about unknown unknowns, such as deployment-time data distributions, adversarial behaviour, human preferences, and incomplete or evolving knowledge.
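To make the notion concrete, one common operationalisation, sketched below purely as an illustration (the ensemble setup and function names are our own, not a method prescribed here), measures epistemic uncertainty as disagreement among an ensemble of models: the entropy of the averaged prediction splits into the average entropy of the individual members (aleatoric noise that more knowledge cannot remove) plus the remaining mutual information (epistemic uncertainty that could, in principle, be reduced by learning more).

```python
import numpy as np

def entropy(p, axis=-1):
    """Shannon entropy in nats; p is a probability vector along `axis`."""
    return -np.sum(p * np.log(np.clip(p, 1e-12, 1.0)), axis=axis)

def uncertainty_decomposition(member_probs):
    """Split predictive uncertainty for one input into its two parts.

    member_probs: (n_members, n_classes) predictive distributions of an
    ensemble. Returns (total, aleatoric, epistemic), where
      total     = entropy of the averaged prediction,
      aleatoric = average entropy of the members,
      epistemic = total - aleatoric (the mutual information between the
                  prediction and the choice of ensemble member).
    """
    mean_pred = member_probs.mean(axis=0)
    total = entropy(mean_pred)
    aleatoric = entropy(member_probs).mean()
    return total, aleatoric, total - aleatoric

# Members agree on a fair coin flip: the uncertainty is aleatoric.
agree = np.array([[0.5, 0.5], [0.5, 0.5]])
# Members are confident but contradict each other: it is epistemic.
disagree = np.array([[0.99, 0.01], [0.01, 0.99]])
print(uncertainty_decomposition(agree))     # ~(0.69, 0.69, 0.00)
print(uncertainty_decomposition(disagree))  # ~(0.69, 0.06, 0.64)
```

The second case is precisely the situation in which prediction is unwarranted: each model is individually confident, yet the system as a whole should recognise its ignorance and abstain.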
To provide a unifying perspective on how these unknown unknowns are addressed, this workshop aims to bring together researchers from diverse subfields of machine learning whose work confronts epistemic uncertainty from complementary angles. The workshop places particular emphasis on real-world impact, with a focus on robustness, AI safety, AI alignment, and generative AI. To this end, topics to be covered include, but are not limited to:
Foundations of Uncertainty Modelling
Uncertainty-aware Generative AI and Foundation Models
AI Safety as an Epistemic Problem
AI Alignment under Objective Uncertainty
Lifelong and Continual Learning in Open Worlds
Submission opens: 20 March 2026
Paper submission deadline: 20 April 2026
Review period: 27 April – 27 May 2026
Author notification: 4 June 2026
Camera-ready submission deadline: 15 June 2026
Main workshop: 10 July 2026
For more information regarding submission, registration, and workshop schedule, please refer to Calls, Registration, and Schedule, respectively.
This workshop seeks contributions from researchers across machine learning, statistics, philosophy of science, decision theory, and related disciplines to explore theoretical foundations, algorithmic innovations, and practical applications centred on the idea of unknown unknowns. We welcome both works-in-progress and mature research that address the central challenge of reasoning and decision-making under epistemic uncertainty.
Topics of interest include, but are not limited to:
Foundations of Uncertainty
Generalisations of probability theory, including imprecise probability and higher-order probabilistic models
Formal distinctions between randomness, ambiguity, and ignorance in AI systems
Coherence, rationality, and consistency principles for learning and inference under incomplete information
Implications for generalisation, model comparison, and robustness under distributional uncertainty
Statistical and decision-theoretic foundations of epistemic uncertainty
Uncertainty-aware Generative AI and Foundation Models
Epistemic uncertainty and ignorance in generative models
Hallucination as an epistemic failure and strategies for its mitigation
Uncertainty-aware decoding, prompting, and inference
Uncertainty-aware reward modelling and alignment
AI Safety as an Epistemic Problem
Reframing AI safety from robustness against known failures to reasoning under unknown unknowns
Safety violations arising from overconfident extrapolation beyond the support of the data
Formal mechanisms for identifying epistemic blind spots, enabling abstention, and supporting safe fallback behaviour
Principled criteria governing when learning systems should refuse to act
AI Alignment under Objective Uncertainty
Alignment when objectives are incomplete, evolving, or strategically manipulated
Explicit modelling of value uncertainty rather than fixed reward optimisation
Limits of preference learning and reward modelling under partial observability
Alignment failures as epistemic mismatches between system beliefs, incentives, and social objectives
Lifelong and Continual Learning in Open Worlds
Learning as long-term belief revision rather than repeated retraining
Epistemic challenges posed by non-stationarity, novelty, and concept emergence
Catastrophic forgetting as a failure of coherent uncertainty propagation
Principled update rules for accumulating knowledge without collapsing uncertainty prematurely
We encourage both theoretical contributions and applied case studies. Submissions that challenge prevailing assumptions, propose novel benchmarks, or provide insights into the philosophical and foundational dimensions of uncertainty in AI are especially welcome.
Speakers
Jeremie Houssineau, Assistant Professor, Nanyang Technological University, Singapore
Belinda Zou Li, PhD Candidate, Massachusetts Institute of Technology, US. Expertise: Trustworthy Language Models
Claire Vernade, Professor, University of Technology Nuremberg, Germany. Expertise: Lifelong Reinforcement Learning
Organisers
Sahar Abdelnabi, ELLIS Institute Tübingen & MPI-IS, Germany
Michele Caprio, University of Manchester
Siu Lun Chau, Nanyang Technological University
Arnaud Doucet, Google DeepMind
Shireen Kudukkil Manchingal, Oxford Brookes University
Krikamol Muandet, CISPA Helmholtz Center for Information Security
Reviewers
Sanoufar Abdul Azeez (IIITM Kerala)
Amitesh Badkul (CUNY)
James Bailie (Chalmers)
Yasir Zubayr Barlas (Manchester)
C. Battiloro (Harvard)
Buelte (LMU)
Rabanus Derr (Tübingen)
Johanna Einsiedler (Copenhagen)
Feyza Eksen (Rostock)
Adam Faza (KU Leuven)
Javier Fumanal-Idocin (Essex)
Xabier Gonzalez-Garcia (Navarra)
Nicholas Hadjisavvas (UCL)
Paul Hofman (LMU)
Benedikt Höltgen (HPI)
Jeremie Houssineau (NTU Singapore)
Ismail Huseynov (PTB Germany)
Alireza Javanmardi (LMU)
Mira Juergens (Ghent)
Carlo Kneissl (LMU)
Kody J. H. Law (Manchester)
Matthijs van der Lende (Groningen)
Valentin Margraf (LMU)
Giorgio Morales (Caen Normandie)
Tanmoy Mukherjee (VUB Brussels)
Ayush Pandey (TCS India)
Ramon Daniel Regueiro-Espino (Sorbonne)
Julien Rodemann (LMU)
Yusuf Sale (LMU)
Sören Schleibaum (TU Clausthal)
Annika Schneider (Helmholtz Munich)
N. J. Schutte (TU Delft)
Vaisakh Shaj (Edinburgh)
Anurag Singh (CISPA)
Sabina Sloman (Manchester)
Gokul Srinath Seetha Ram (PareIT)
Maryam Sultana (Oxford Brookes)
Matteo Tolloso (Pisa)
R. Verma (Amsterdam)
Susanna Di Vita (ETH Zurich)
Fanyi Wu (Manchester)