About
Reinforcement learning (RL) is traditionally studied under the assumption of stationarity. Yet, real-world systems operate in open-ended, evolving environments. This motivates continual reinforcement learning (CRL), where agents must learn, adapt, and retain knowledge under persistent non-stationarity.
The full-day CRL workshop aims to consolidate perspectives across CRL and related areas, clarify foundational formulations, advance principled approaches, and discuss emerging applications. See the topics of interest in our Call for Papers for details.
We are also looking for reviewers. If you are interested in reviewing papers, please nominate yourself through this form!
Important Dates
Submission Deadline (Tentative): 22 May 2026 (AoE)
Acceptance Notification: 5 June 2026 (AoE)
Camera-ready Deadline: 17 June 2026 (AoE)
Workshop Day: 15 August 2026, 9:00-17:30
University of Alberta / Amii
Carnegie Mellon University
The University of Texas at Austin / Sony AI
Brown University
DeepFlow
University of Alberta / Amii
Polytechnique Montréal / Mila
Google DeepMind / University of Edinburgh
Rice University
Organizers
McGill University
Polytechnique Montréal
Université de Montréal
McGill University
Google DeepMind
University of Alberta / Amii
McGill University / Mila / Google DeepMind
Acknowledgement
We would like to thank the following individuals for their insightful discussions and generous assistance with organizing the workshop.
Yaqi Xie, Carnegie Mellon University
Shihan Wang, Utrecht University
Zihan Wang, McGill University
Hadi Nekoei, Université de Montréal
Wanru Zhao, University of Cambridge