Reinforcement Learning (RL) and Causal Inference have developed as largely independent fields, yet they share a fundamental connection: both aim to model and reason about how actions influence outcomes in uncertain environments. Despite this shared foundation, interaction between the two disciplines has been limited, leaving many fundamental questions unresolved.
Recently, there has been growing interest in bridging these fields to extend the decision-making paradigm beyond standard (PO)MDPs, accommodating potential confounding biases in real-world environments and enhancing the generalization, robustness, and sample efficiency of RL algorithms. Causal concepts have the potential to improve RL in various ways: enabling better credit assignment, guiding exploration, improving transportability across tasks, and facilitating better explanations. Conversely, RL offers an interactive framework for causal decision-making, realizing the concept of interventions in complex real-world environments.
The CausalRL Workshop aims to bring together researchers at the intersection of RL and Causality to explore new opportunities, challenges, and recent advances. Through invited talks, contributed presentations, and discussions, we seek to foster collaboration and define key research directions that could shape the future of causal reinforcement learning. By doing so, the CausalRL Workshop will highlight cutting-edge research and practical applications, addressing questions including, but not limited to:
Causal Offline (to Online) RL: Causal models can help RL agents better handle distribution shifts, unobserved confounders, and covariate shifts in offline datasets; and ultimately transfer the knowledge from confounded offline datasets to online environments.
Causal World Modeling: Causal discovery can help learn more robust and explainable world models, enabling better long-term prediction and planning with counterfactual reasoning.
Causal Discovery in Interactive Environments: RL agents can leverage active interventions to uncover causal relationships and improve decision-making.
Causal Representation Learning for RL: Learning causally-structured latent spaces from high-dimensional observations can enhance sample efficiency, generalization, and transferability in downstream decision-making tasks.
Causality for Robust and Safe RL: Causal reasoning can help identify the true causes of both desired outcomes and failure modes, improving robustness against spurious correlations, adversarial attacks, and domain shifts under safety constraints.
30/07: The updated speaker lineup is now live!
21/04: The Call for Papers has been published!
09/04: Our website is now live. Talk titles will be announced soon!
Call for Papers: link
Workshop Paper Submission Deadline: 15 June 2025 (AoE) (extended from 30 May 2025)
Accept/Reject Notification Date: 20 June 2025
Camera Ready Submission: 15 July 2025
Workshop Day: 5 August 2025
Davide Corsi: dcorsi@uci.edu
Annie Raichev: araichev@uci.edu