Machine learning and AI have long been concerned with modeling how an agent can change the world around it. However, intervening in the physical world takes effort, leading to sparse evidence and corresponding credibility gaps when an agent considers carrying out previously unseen actions. Making the most of sparse data within a combinatorial explosion of possible actions, dose levels, and waiting times requires careful thinking, akin to efforts to introduce more compositionality principles into machine learning. The goal of this workshop is to bring together state-of-the-art ideas on how to predict the effects of novel interventions and distribution shifts by exploiting original ways of composing evidence from multiple data-generating regimes.
While machine learning research in causal inference has traditionally addressed how to extrapolate from observational data and combinations of experiments, several recent papers have introduced novel ways to represent and integrate lessons from different perturbations, including but not limited to: breaking the representation of a treatment vector into flexible sparse [1] or low-dimensional [2] function spaces; intervention-aware energy functions [3,4]; dealing with interventions represented as graphs of entities and links [5]; establishing connections to embeddings and residual representations [6,7]; reusing causal representations from low-level data across multiple environments [8]; or representing effects via compositional modules in a deep model pipeline [9].
The workshop will provide a venue for researchers across many corners of the ICML community who are interested in novel ways of generalizing from interventions and multiple environments, and for artificial intelligence researchers interested in how compositionality plays a role in causal reasoning. It will provide a forum for contributions from causal machine learning, distribution shift and domain adaptation, representation learning, reinforcement learning, bandits and Bayesian optimization, and application areas such as medical spatial treatments [2], cell biology [7,10], economics [11], recommender systems [12], and LLM agents implementing or recommending real-world actions.
[1] Agarwal, A., Agarwal, A., and Vijaykumar, S. Synthetic combinations: A causal inference framework for combinatorial interventions. Advances in Neural Information Processing Systems 36 (NeurIPS 2023), pp. 19195–19216, 2023.
[2] Nabi, R., McNutt, T., and Shpitser, I. Semiparametric causal sufficient dimension reduction of multidimensional treatments. Proceedings of the 38th Conference on Uncertainty in Artificial Intelligence (UAI 2022), pp. 1445–1455, 2022.
[3] Bravo-Hermsdorff, G., Watson, D., Yu, J., Zeitler, J., and Silva, R. Intervention generalization: A view from factor graph models. Advances in Neural Information Processing Systems 36 (NeurIPS 2023), pp. 43662–43675, 2023.
[4] Talon, D., Lippe, P., James, S., Bue, A. D., and Magliacane, S. Towards the reusability and compositionality of causal representations. 3rd Conference on Causal Learning and Reasoning (CLeaR 2024), 2024.
[5] Kaddour, J., Zhu, Y., Liu, Q., Kusner, M. J., and Silva, R. Causal effect inference for structured treatments. Advances in Neural Information Processing Systems 34 (NeurIPS 2021), pp. 24841–24854, 2021.
[6] Saito, Y., Qingyang, R., and Joachims, T. Off-policy evaluation for large action spaces via conjunct effect modeling. Proceedings of the 40th International Conference on Machine Learning (ICML 2023), pp. 29734–29759, 2023.
[7] Gaudelet, T., Vecchio, A. D., Carrami, E., Cudini, J., Kapourani, C.-A., Uhler, C., and Edwards, L. Season combinatorial intervention predictions with Salt & Peper. ICLR 2024 Workshop on Machine Learning for Genomics Explorations, 2024.
[8] Talon, D., Lippe, P., James, S., Bue, A. D., and Magliacane, S. Towards the reusability and compositionality of causal representations. 3rd Conference on Causal Learning and Reasoning (CLeaR 2024), 2024.
[9] Pruthi, P. and Jensen, D. Compositional models for estimating causal effects. 4th Conference on Causal Learning and Reasoning (CLeaR 2025), 2025.
[10] Zhang, J., Greenewald, K., Squires, C., Srivastava, A., Shanmugam, K., and Uhler, C. Identifiability guarantees for causal disentanglement from soft interventions. Advances in Neural Information Processing Systems 36 (NeurIPS 2023), pp. 50254–50292, 2023.
[11] Higbee, S. Policy learning with new treatments. arXiv preprint arXiv:2210.04703, 2023.
[12] Li, H., Wu, K., Zheng, C., Xiao, Y., Wang, H., Geng, Z., Feng, F., He, X., and Wu, P. Removing hidden confounding in recommendation: A unified multi-task learning approach. Advances in Neural Information Processing Systems 36 (NeurIPS 2023), pp. 54614–54626, 2023.