Metacognition---the ability to think about one's own thinking---has long been recognized as a cornerstone of human intelligence. Decades of research in cognitive science and neuroscience have investigated human metacognitive abilities such as uncertainty representation, decision evaluation, confidence communication, and changes of mind. Since these capacities enable humans to make more reliable decisions, it is natural to ask whether analogous capabilities are present in generative AI systems: Do they know what they know?
This workshop aims to address challenges in assessing and expressing uncertainty in generative AI systems. We bring together cognitive science and neuroscience researchers working on human metacognition, and ML researchers working on uncertainty quantification in generative AI, to encourage interdisciplinary discussion that aims to:
Clarify what it means for an LLM to "know what it knows"
Explore how paradigms from human confidence studies can inspire new modeling, evaluation, and interface designs for trustworthy, uncertainty-aware language models
Determine which benchmark tasks, evaluation metrics, and methods respect the sequential, multimodal nature of LLM inputs and outputs, and clarify how these should differ from traditional uncertainty quantification methods and metrics.
Metacognition in Humans and Generative AI
Our panel brings together cognitive neuroscientists who have long studied the cognitive mechanisms of human confidence judgments, and computer scientists who are experts on uncertainty quantification in generative AI. We believe that these diverse perspectives will make for a highly engaging panel discussion and yield valuable insights.