📣 Paper submission is now open. Submit your work here: link
📣 Submissions are non-archival and may include exploratory or in-progress research, extensions of prior publications, and novel, unpublished work.
📣 Thanks to RBC Borealis, the workshop will offer 10 travel grants, 1 Best Paper Award, and 3 Best Poster Awards.
📣 We invite researchers and practitioners to self-nominate as reviewers for the workshop. If you are interested, please complete the reviewer self-nomination form: link
This workshop addresses methods for ensuring the reliability of machine learning systems under distribution drift. Such drift may occur as temporal, label, concept, domain, or representation shift, each presenting unique challenges for monitoring and adaptation. We organize the program into three complementary themes:
Sensing the Drift: Developing tools to detect when models encounter distributional change, using statistical tests, kernel methods, uncertainty estimation, and representation-based monitoring. This theme focuses on identifying drift early and minimizing false alarms in real-world pipelines (see the illustrative sketch after the theme descriptions).
Responding to Drift: Designing strategies that adapt models once drift is detected, including test-time adaptation, continual and online learning, regularization, and selective prediction. The focus is on maintaining accuracy and stability while avoiding catastrophic forgetting.
Operating at Scale: Extending monitoring and adaptation to large-scale production environments, where heterogeneous data streams, governance requirements, and real-time costs amplify the challenge. This theme emphasizes system design, infrastructure, benchmarks, and protocols for reliable deployment at scale.
Together, these themes establish a unified research agenda for developing robust and trustworthy machine learning systems in dynamic, non-stationary environments.
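As a minimal illustration of the first theme, the sketch below flags drift in a single model input using a two-sample Kolmogorov–Smirnov test. The simulated feature values, the alarm threshold, and the use of scipy are illustrative assumptions rather than a prescribed method; real monitoring pipelines typically combine many such signals with explicit false-alarm control.

# Illustrative drift-monitoring sketch: compare a reference window of feature
# values collected at training time against a recent production window using
# a two-sample Kolmogorov-Smirnov test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference_window = rng.normal(loc=0.0, scale=1.0, size=2000)   # training-time feature values (simulated)
production_window = rng.normal(loc=0.4, scale=1.0, size=2000)  # recent values with a simulated mean shift

statistic, p_value = ks_2samp(reference_window, production_window)

ALPHA = 0.01  # illustrative false-alarm threshold
if p_value < ALPHA:
    print(f"Drift alarm: KS statistic={statistic:.3f}, p-value={p_value:.1e}")
else:
    print("No significant drift detected in this window")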
We invite contributions spanning monitoring, adaptation, fairness, and governance of ML systems under distributional drift, including (but not limited to):
Drift detection and characterization: Statistical tests, kernel methods, density-ratio estimation, causal or graph-based change detection, sequential monitoring, and early-warning systems for drift identification.
Representation and uncertainty: Calibration, uncertainty estimation, explainability, and representation-based monitoring for identifying concept, covariate, or representation drift.
Adaptation and recovery: Test-time adaptation, online learning, continual learning, domain generalization, and meta-learning approaches for handling evolving data distributions.
Fairness and bias under drift: Methods for maintaining fairness, subgroup robustness, and demographic parity as data or population shifts occur; detection and mitigation of emerging biases during deployment.
Governance and auditing: Frameworks for model monitoring under regulatory and compliance constraints; accountability, audit trails, and explainable intervention mechanisms in high-stakes domains.
Reliability at scale: Systems and infrastructure for large-scale monitoring, data logging, alerting, and automated response in production pipelines.
Submissions are non-archival and may include under-review work, exploratory or in-progress research, extensions of prior publications, as well as novel and unpublished contributions.
Important dates:
Paper submission opens: January 1, 2026
Paper submission deadline: February 10, 2026
Review deadline: February 25, 2026
Author notification: March 1, 2026
Camera-ready deadline for accepted papers: March 16, 2026
Workshop day: April 26 or 27, 2026
Submissions must be made through OpenReview and formatted using the ICLR conference proceedings style.
We invite submissions to two tracks:
Papers submitted to the main track may be up to 8 pages, excluding references.
Tiny papers are intended for concise contributions and must be no longer than 4 pages, excluding references.
All submissions will undergo a double-blind, non-archival review process.
All accepted papers will be presented in an extended poster session at the workshop. In addition, a small number of papers will be selected for spotlight oral presentations.
Beyond the main track, we will host a dedicated track on Lessons from Failures: Understanding What Did Not Work and Why.
Submissions to this track are limited to a maximum of 4 pages, excluding references, and should follow the ICLR formatting guidelines.
This track welcomes submissions that report:
Methods that fell short of expectations
Failed or inconclusive experiments
Unexpected challenges or negative results
By encouraging openness about what did not work, this track aims to reduce duplicated effort, strengthen scientific rigor, and accelerate progress across the community.
We are pleased to introduce 1 Best Paper Award and 3 Best Poster Awards to recognize exceptional submissions. Award recipients will receive:
A certificate of recognition
A monetary prize, generously sponsored by RBC Borealis.
Selections will be made based on recommendations from the program committee.
To promote inclusivity and broaden participation, we are offering 10 travel grants to support attendees from diverse backgrounds. This initiative is made possible through the generous support of RBC Borealis and aims to foster richer and more inclusive discussions at the workshop.
Stanford University
Caltech
Seoul National University
Google DeepMind
RBC Borealis
University of British Columbia
University of New South Wales
Ticiana L. Coelho da Silva
Brazilian Office of the Comptroller General
RBC Borealis
University of Illinois Urbana-Champaign
Amazon
AIST
York University
Singapore-MIT Alliance for Research and Technology
Shanghai Jiao Tong University
Qualcomm AI Research