Our workshop investigates whether generative AI models exhibit metacognitive capabilities similar to those of humans: Do they know what they know? Can they judge and express their confidence? Can they explain their reasoning? Such metacognitive abilities can improve model performance, reduce hallucination rates, and trigger a change of mind when a model is presented with new evidence; they are also prerequisites for safe deployment and informed decision-making. Our workshop will bring together researchers from machine learning, cognitive science, and neuroscience to foster interdisciplinary discussion on this topic.
We invite submissions on topics including, but not limited to, the following:
Interdisciplinary approaches connecting cognitive science, neuroscience, and machine learning to study, model, and evaluate metacognition in humans and generative AI.
Theoretical foundations for understanding and modeling metacognitive capabilities of generative AI systems.
Effective and scalable methods, benchmarks, and metrics for uncertainty representation and quantification in generative models.
Techniques for communicating uncertainty in generative AI systems, and their effects on user trust, informed decision-making, and human-AI collaboration.
Applications of uncertainty quantification to detect and reduce hallucination, support safe deployment, and enhance model performance.
Adapting classical uncertainty quantification methods, such as conformal prediction and (multi)calibration, to the sequential nature of LLM outputs (a minimal single-shot sketch of split conformal prediction follows this list).
Challenges in uncertainty estimation for LLMs (e.g., input ambiguity, semantic equivalence, multimodality) and possible solutions.
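As a concrete point of reference for the conformal prediction topic above, here is a minimal sketch of classical split conformal prediction on a synthetic multiple-choice task. The data, nonconformity score, and miscoverage level below are illustrative assumptions, not a method endorsed by the workshop; the open question the topic raises is how to extend this single-shot recipe to sequential, free-form LLM outputs.

```python
# Minimal split conformal prediction sketch on synthetic data.
# Everything here (data, score function, alpha) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
n_cal, n_classes = 1000, 4

# Hypothetical calibration set: per-example class probabilities from a model,
# plus the true labels.
probs = rng.dirichlet(np.ones(n_classes), size=n_cal)
labels = rng.integers(n_classes, size=n_cal)

# Nonconformity score: one minus the probability assigned to the true class.
cal_scores = 1.0 - probs[np.arange(n_cal), labels]

# Conformal quantile with the finite-sample (n + 1) correction.
alpha = 0.1  # target miscoverage rate
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
threshold = np.quantile(cal_scores, min(q_level, 1.0), method="higher")

# Prediction set for a new example: all classes whose score clears the
# threshold. Under exchangeability, the set contains the true class with
# probability at least 1 - alpha.
test_probs = rng.dirichlet(np.ones(n_classes))
prediction_set = np.where(1.0 - test_probs <= threshold)[0]
print(f"threshold={threshold:.3f}, prediction set={prediction_set}")
```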
We welcome submissions of original and unpublished work, as well as work under review or previously published elsewhere. Our workshop is non-archival. Please follow these guidelines:
Page limit: The main content of the paper should be 4-8 pages (excluding references and supplementary material).
Style: Submissions must be formatted using the NeurIPS 2025 LaTeX style file.
Double-blind reviewing: Our reviewing process is double-blind, and submissions must be fully anonymized.
Physical poster presentation: At least one author of each accepted paper must attend the workshop to present a poster.
Oral presentation: Our program committee will select the two best papers for oral presentation at the workshop.
Submission: Please send your submissions via OpenReview. All authors are required to have an up-to-date OpenReview profile. Please note that new profiles created without an institutional email will go through a moderation process that can take up to two weeks, while new profiles created with an institutional email will be activated automatically.
Submission deadline: October 17, 2025, AoE
Notification of acceptance: October 31, 2025, AoE
Camera-ready deadline: November 21, 2025, AoE
Workshop date: December 6 or 7, 2025