From Underspecification to Alignment:
Breaking the One-Model Mindset for Reliable AI
AAAI 2026, Singapore
In high-stakes domains like finance and healthcare, we often deploy a single machine learning model and trust it as the “best” possible solution. This tutorial challenges that assumption by exploring the Rashomon Effect -- the idea that many different models can achieve equally high performance on the same dataset. Far from being a mere theoretical curiosity, the Rashomon Effect represents a fundamental, yet often overlooked, paradigm shift in how we should build, evaluate, and trust AI systems. By revealing model multiplicity, the Rashomon Effect enables alignment: the ability to select among high-performing models those that best match user or societal preferences.
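Concretely, the Rashomon set is commonly formalized as the set of models whose loss is within a small tolerance of the best achievable loss; in commonly used notation (the exact symbols in the tutorial may differ), for a model class \mathcal{F}, empirical loss \hat{L}, reference model f^{*} (e.g., an empirically optimal model), and tolerance \varepsilon \ge 0,

\mathcal{R}(\varepsilon, f^{*}, \mathcal{F}) = \{\, f \in \mathcal{F} : \hat{L}(f) \le \hat{L}(f^{*}) + \varepsilon \,\}.

Every model in this set fits the data nearly as well as the reference model, yet these models may differ substantially in their structure, explanations, and individual predictions.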
This quarter-day tutorial will provide a comprehensive introduction to the Rashomon Effect in machine learning, moving from its foundations to practical implications.
The tutorial is organized into four parts:
Part 1: Introduction to the Rashomon Effect and its impact on research and real-world decision-making
Part 2: Recent algorithms for constructing Rashomon sets
Part 3: How the Rashomon Effect enables alignment
Part 4: Open problems and future directions
By merging theory, algorithms, real-world examples, and interactive exploration, this tutorial equips participants with new insights and tools to recognize, analyze, and harness the Rashomon Effect, ultimately enabling the development of more robust, interpretable, and trustworthy machine learning systems.
Part 1: Introduction to the Rashomon Effect and Its Properties (30 mins)
What the Rashomon Effect is and how it changes data science pipelines and approaches to machine learning
Examples of its consequences in machine learning research and real-world applications
Causes of the Rashomon Effect and why it is inevitable
Part 2: Constructing and Exploring the Rashomon Set (30 mins)
Algorithms for constructing Rashomon sets across model classes, such as decision trees, generalized additive models, and neural networks
Hands-on activity using TreeFARMS and TimberTrek to obtain interpretable and accurate trees
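For participants who want a preview of the hands-on activity, below is a minimal sketch of the kind of workflow it covers, assuming the treefarms Python package; the configuration keys and accessors shown here are assumptions based on the package's documentation and may differ across versions.

import pandas as pd
from treefarms import TREEFARMS

# TreeFARMS operates on binarized features with a binary label in the last column;
# "compas_binned.csv" is a hypothetical pre-binarized dataset used only for illustration.
df = pd.read_csv("compas_binned.csv")
X, y = df.iloc[:, :-1], df.iloc[:, -1]

config = {
    "regularization": 0.01,             # sparsity penalty per leaf (assumed config key)
    "rashomon_bound_multiplier": 0.05,  # width of the Rashomon set relative to the optimal loss (assumed config key)
}

model = TREEFARMS(config)
model.fit(X, y)  # enumerates the set of near-optimal sparse decision trees

first_tree = model[0]  # inspect one tree from the Rashomon set
print("training accuracy of the first tree:", first_tree.score(X, y))
# The resulting set of trees can then be explored visually with TimberTrek.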
Break (5 mins)
Part 3: From Model Multiplicity to Alignment (30 mins)
How the Rashomon Effect leads to prediction multiplicity and its impact on applications (see the brief illustration after this list)
Trustworthy AI: interpretability, fairness, and robustness
Implications for regulation and policy
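As a quick, self-contained illustration of prediction multiplicity (using scikit-learn and synthetic data; not part of the tutorial materials): two models with comparable accuracy can still assign conflicting labels to the same individuals.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary classification task.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Two shallow trees that differ only in their randomized split choices;
# their test accuracies are typically very close.
m1 = DecisionTreeClassifier(max_depth=5, splitter="random", random_state=1).fit(X_tr, y_tr)
m2 = DecisionTreeClassifier(max_depth=5, splitter="random", random_state=2).fit(X_tr, y_tr)
print("accuracies:", m1.score(X_te, y_te), m2.score(X_te, y_te))

# Yet they can disagree on individual predictions (prediction multiplicity).
disagreement = np.mean(m1.predict(X_te) != m2.predict(X_te))
print(f"fraction of test points with conflicting predictions: {disagreement:.2%}")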
Part 4: Future of the Field and Open Problems (10 mins)
The Rashomon Effect in complex model classes, including large language models
Chudi Zhong is an Assistant Professor in the School of Data Science and Society and the Department of Statistics and Operations Research at UNC-Chapel Hill. Her research focuses on interpretable machine learning, the Rashomon set and model diversity, and human–model interaction. She has published in leading venues including NeurIPS, ICML, AAAI, AISTATS, IEEE VIS, and Statistics Surveys. She was awarded second place in the 2023 Bell Labs Prize and was selected as a 2023 Rising Star in Data Science.
Lesia Semenova is an Assistant Professor in the Department of Computer Science at Rutgers University-New Brunswick. Prior to that, she was a postdoctoral researcher at Microsoft Research. She was selected as a Rising Star in Computational and Data Sciences and received an Outstanding Thesis Award from Duke University. Her research interests span responsible and trustworthy AI, interpretability, human-centered design, reinforcement learning, and reasoning. She is especially interested in developing tools and pipelines that facilitate informed decision-making in high-stakes domains, as well as in theoretically explaining phenomena that are often observed in practice.