This workshop seeks contributions from researchers across machine learning, statistics, philosophy of science, decision theory, and related disciplines to explore theoretical foundations, algorithmic innovations, and practical applications that center on the idea of unknown unknowns. We welcome both works-in-progress and mature research that address the central challenge of reasoning and decision-making under epistemic uncertainty.
Topics of interest include, but are not limited to:
Foundations of Uncertainty
Uncertainty-aware Generative AI and Foundation Models
Epistemic uncertainty and ignorance in generative models;
hallucination as an epistemic failure and strategies for its mitigation;
uncertainty-aware decoding, prompting, and inference;
and uncertainty-aware reward modelling and alignment.
AI Safety as an Epistemic Problem
Reframing AI safety from robustness against known failures to reasoning under unknown unknowns;
safety violations arising from overconfident extrapolation beyond the support of the data;
formal mechanisms for identifying epistemic blind spots, enabling abstention, and supporting safe fallback behaviour;
and principled criteria governing when learning systems should refuse to act.
AI Alignment under Objective Uncertainty
Alignment when objectives are incomplete, evolving, or strategically manipulated;
explicit modelling of value uncertainty rather than fixed reward optimisation;
limits of preference learning and reward modelling under partial observability;
and alignment failures as epistemic mismatches between system beliefs, incentives, and social objectives.
Lifelong and Continual Learning in the Open World
Learning as long-term belief revision rather than repeated retraining;
epistemic challenges posed by non-stationarity, novelty, and concept emergence;
catastrophic forgetting as a failure of coherent uncertainty propagation;
and principled update rules for accumulating knowledge without collapsing uncertainty prematurely.
We encourage both theoretical contributions and applied case studies. Submissions that challenge prevailing assumptions, propose novel benchmarks, or provide insights into the philosophical and foundational dimensions of uncertainty in AI are especially welcome.
Submissions should present novel, unpublished work. Work that previously appeared in non-archival venues (such as arXiv or other workshops without proceedings) is allowed. All submitting authors are required to have an OpenReview profile: please ensure your profile is up to date before submitting.
Submission Instructions
To submit your paper, please follow the instructions and guidelines below:
All contributions should be made via OpenReview.
We welcome submissions of original, unpublished material, as well as work that is currently under review (i.e., has been submitted but not yet accepted elsewhere). Note that new OpenReview profiles created without an institutional email go through a moderation process that can take up to two weeks, while those created with an institutional email are activated automatically.
Page limit: Papers should be between 4 and 6 pages, excluding references and supplementary materials.
Template: Please use the EIML 2026 LaTeX style files.
Double-blind reviews: Authors should anonymize their submissions to ensure a double-blind review process.
LLM policy: In preparing your contributions, LLMs may be used only as general-purpose writing-assistance tools.
Publication. The EIML workshop is non-archival, so submitting here should generally not violate dual-submission policies at archival venues (e.g., submitting work that is currently under review at another conference is permitted); if unsure, please check with the corresponding venue.
Attending the workshop. Our workshop is primarily an in-person event, and authors are asked to present a poster at the workshop if possible. A subset of papers will be selected for presentation in short 10-minute spotlight talks.