This workshop addresses methods for ensuring the reliability of machine learning systems under distributional drift. Such drift may occur as temporal, label/concept, domain, or representation shift, each presenting unique challenges for monitoring and adaptation. We organize the program into three complementary themes:
Sensing the Drift: Developing tools to detect when models encounter distributional change, using statistical tests, kernel methods, uncertainty estimation, and representation-based monitoring. This theme focuses on identifying drift early and minimizing false alarms in real-world pipelines.
Responding to Drift: Designing strategies that adapt models once drift is detected, including test-time adaptation, continual and online learning, regularization, and selective prediction. The focus is on maintaining accuracy and stability while avoiding catastrophic forgetting.
Operating at Scale: Extending monitoring and adaptation to large-scale production environments, where heterogeneous data streams, governance requirements, and real-time costs amplify the challenge. This theme emphasizes system design, infrastructure, benchmarks, and protocols for reliable deployment at scale.
Together, these themes establish a unified research agenda for developing robust and trustworthy ML systems in dynamic, non-stationary environments.
We invite contributions spanning monitoring, adaptation, fairness, and governance of ML systems under distributional drift, including (but not limited to):
Drift detection and characterization: Statistical tests, kernel methods, density-ratio estimation, causal or graph-based change detection, sequential monitoring, and early-warning systems for drift identification.
Representation and uncertainty: Calibration, uncertainty estimation, explainability, and representation-based monitoring for identifying concept, covariate, or representation drift.
Adaptation and recovery: Test-time adaptation, online learning, continual learning, domain generalization, and meta-learning approaches for handling evolving data distributions.
Fairness and bias under drift: Methods for maintaining fairness, subgroup robustness, and demographic parity as data or population shifts occur; detection and mitigation of emerging biases during deployment.
Governance and auditing: Frameworks for model monitoring under regulatory and compliance constraints; accountability, audit trails, and explainable intervention mechanisms in high-stakes domains.
Reliability at scale: Systems and infrastructure for large-scale monitoring, data logging, alerting, and automated response in production pipelines.
Important dates:
Paper submission opens: January 1, 2026
Paper submission deadline: January 30, 2026
Reviews due: February 23, 2026
Author notification: March 1, 2026
Camera-ready deadline for accepted papers: March 9, 2026
Workshop day: April 26 or 27, 2026
Submissions must be made through OpenReview and formatted using the ICLR conference proceedings style.
We invite submissions to two tracks:
Papers submitted to the main track may be up to 8 pages, excluding references.
Tiny papers are intended for concise contributions and must be no longer than 5 pages in total, including references.
All submissions will undergo a double-blind, non-archival review process.
All accepted papers will be presented in an extended poster session at the workshop. In addition, a small number of papers will be selected for spotlight oral presentations.
Beyond the main track, we will host a dedicated track on Lessons from Failures: Understanding What Did Not Work and Why.
Submissions to this track are limited to a maximum of 4 pages, excluding references, and should follow the ICLR formatting guidelines.
This track welcomes submissions that report:
Methods that fell short of expectations
Failed or inconclusive experiments
Unexpected challenges or negative results
By encouraging openness about what did not work, this track aims to reduce duplicated effort, strengthen scientific rigor, and accelerate progress across the community.
We are pleased to introduce a Best Paper Award and three Best Poster Awards to recognize exceptional submissions. Award recipients will receive:
A certificate of recognition
A monetary prize, generously sponsored by RBC Borealis
Selections will be made based on recommendations from the program committee.
To promote inclusivity and broaden participation, we are offering a number of travel grants to support attendees from diverse backgrounds. This initiative is made possible through the generous support of RBC Borealis and aims to foster richer and more inclusive discussions at the workshop.
Chelsea Finn (Stanford University)
Anima Anandkumar (Caltech)
Taesup Moon (Seoul National University)
Arthur Gretton (Google DeepMind)
Masashi Sugiyama (The University of Tokyo)
Rahaf Aljundi (Toyota Motors)
Elahe Arani (Wayve)
Fred Tung (RBC Borealis)
Evan Shelhamer (University of British Columbia)
Flora Salim (University of New South Wales)
Ticiana L. Coelho da Silva (Brazilian Office of the Comptroller General)
Sepid Hosseini (RBC Borealis)
Bo Li (University of Illinois Urbana-Champaign)
Murat Sensoy (Amazon)
Chung-Chi Chen (AIST)
Elham Dolatabadi (York University)
Teresa Yeo (Singapore-MIT Alliance for Research and Technology)
Dequan Wang (Shanghai Jiao Tong University)
Motasem Alfarra (Qualcomm)