Causal learning for decision making

ICLR 2020 workshop

Sunday, April 26, 2020
Addis Ababa, Ethiopia

Call for papers

Submission deadline: February 10, 2020; extended to Tuesday, February 11, 2020, anywhere on earth.

Portal: https://openreview.net/group?id=ICLR.cc/2020/Workshop/CLDM


We are primarily seeking short extended abstracts (4 pages), but will consider longer submissions (up to 8 pages) under more stringent acceptance criteria. Please use the standard ICLR template, in anonymized format. Supplementary material is allowed but will not necessarily be reviewed.

A few papers may be selected for oral presentation; the other accepted papers will be presented in a poster session. There will be no proceedings for this workshop; however, upon the authors’ request, accepted contributions will be made available on the workshop website. Reviewing is double-blind, and submissions of already published work are welcome.


We are interested in submissions on learning causal models with reinforcement learning (sequential decision making), on using causal models for better decision making (planning), and on real-world applications. We are also interested in learning abstract representations of causal models for decision making.

We welcome submissions related to the following topics:

  • Causal induction using reinforcement learning
  • Planning with causal models
  • Learning abstract representations for causal learning
  • Applications of causal models to decision making in real-world settings, for example in relation to fairness, transparency, and safety

Workshop Description

Deep learning has enabled significant improvements in areas as diverse as computer vision, text understanding, and reinforcement learning. A major open challenge, however, is generalization outside the i.i.d. setting: generalization or fast adaptation to distributions that differ from the training distribution. It has been argued that this requires learning not only the statistical correlations within the data, but also the causal model underlying it.

Causal models exploit the conditional distribution of a target variable given its direct causal predictors; this conditional remains identical under interventions on variables other than the target. This invariance idea is closely linked to causality and has been discussed, for example, under the term ‘modularity’ (Pearl, 2009; Schölkopf et al., 2012). Causal knowledge therefore supports decision making in two ways: it allows us to predict the consequences of different actions under the given circumstances, and it helps us make diagnoses that suggest which interventions will be effective. If the data were really generated by a composition of independent causal mechanisms (Peters et al., 2017), then there exists a good factorization of knowledge that mimics that structure. If, in addition, agents in the real world can change only one or very few high-level variables (or the associated mechanisms producing them) at each time step, then the assumption of small change (in the right representation) should generally be valid. We should then be able to obtain fast transfer by recovering a good approximation of the true causal decomposition into independent mechanisms, to the extent that observations and interventions can reveal those mechanisms.
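To make the modularity idea concrete, here is a minimal sketch of a two-variable structural causal model X → Y. This is a hypothetical toy example: the variable names, mechanisms, and coefficients below are illustrative assumptions, not taken from the workshop text. Intervening on X replaces only the mechanism that generates X; the mechanism for Y given X stays fixed, so a model of that mechanism transfers directly to the interventional setting:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n, intervene_x=None):
    """Sample from a toy SCM X -> Y (illustrative example).

    Passing intervene_x performs do(X = intervene_x): it replaces the
    mechanism for X while leaving the mechanism for Y given X untouched.
    """
    if intervene_x is None:
        x = rng.normal(0.0, 1.0, n)         # observational mechanism p(X)
    else:
        x = np.full(n, float(intervene_x))  # intervention do(X = x0)
    y = 2.0 * x + rng.normal(0.0, 0.1, n)   # mechanism p(Y | X) stays fixed
    return x, y

# The slope of Y on X estimated from observational data ...
x_obs, y_obs = sample(10_000)
print(np.polyfit(x_obs, y_obs, 1)[0])  # ~2.0

# ... correctly predicts the outcome of the intervention do(X = 1.5):
_, y_int = sample(10_000, intervene_x=1.5)
print(y_int.mean())  # ~3.0 = 2.0 * 1.5
```

A purely correlational model trained on data where X itself is confounded would not, in general, enjoy this kind of invariance.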

The goal of this workshop is to investigate how much progress is possible by framing the learning problem beyond learning correlations, that is, by uncovering and leveraging causal relations.

Key questions to be addressed and discussed include:

  • What is the role of an underlying causal model in decision making?
  • What is the difference between a prediction made with a causal model and one made with a non-causal model?
  • The way current RL agents explore environments appears less intelligent than the way human learners explore. One reason for this disparity may be that humans, when faced with a novel environment, do not merely observe: they also interact with the world and affect it through their actions. Maintaining a causal model of the world allows the learner to entertain plausible hypotheses and design experiments to test them.
  • Maintaining a distributional belief about the agent’s model of the world as a tool for exploration (minimizing entropy, maximizing knowledge acquisition); a minimal sketch of this idea follows this list.
  • The importance of causality for advantageous decision making could itself prove problematic: research into causal explanations has shown that people often have only rough, skeletal knowledge of causal mechanisms, so their causal knowledge permits only coarse and sometimes incorrect predictions of consequences. Given that our causal knowledge is incomplete or sometimes wrong, basing decisions on causal considerations might even be harmful.
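One simple formalization of the belief-based exploration idea above is expected information gain: choose the experiment whose outcome is expected to shrink the entropy of the agent’s belief the most. The sketch below is an illustrative assumption, not from the workshop text; the function names and the toy binary-hypothesis setup are made up for exposition:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a Bernoulli belief P(H=1) = p."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def expected_information_gain(prior, likelihoods):
    """Expected entropy reduction about a binary hypothesis H after
    observing the binary outcome of one experiment.

    prior:       P(H = 1)
    likelihoods: (P(outcome=1 | H=0), P(outcome=1 | H=1))
    """
    l0, l1 = likelihoods
    p_out = (1 - prior) * l0 + prior * l1       # P(outcome = 1)
    post1 = prior * l1 / p_out                  # P(H=1 | outcome = 1)
    post0 = prior * (1 - l1) / (1 - p_out)      # P(H=1 | outcome = 0)
    expected_posterior_entropy = (
        p_out * entropy(post1) + (1 - p_out) * entropy(post0)
    )
    return entropy(prior) - expected_posterior_entropy

# Pick the experiment whose outcome is most informative about H.
experiments = {"A": (0.5, 0.5),   # outcome independent of H: no information
               "B": (0.1, 0.9)}   # outcome strongly tied to H: informative
gains = {name: expected_information_gain(0.5, lik)
         for name, lik in experiments.items()}
print(gains)  # {'A': 0.0, 'B': ~0.53}
```

An agent maximizing this quantity prefers diagnostic interventions over uninformative ones, mirroring the hypothesis-testing behavior described above.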

Confirmed Speakers

  • Yoshua Bengio (Mila, University of Montreal, Canada)
  • Bernhard Schölkopf (Max Planck Institute, Tübingen, Germany)
  • Lars Buesing (DeepMind, UK)
  • Alison Gopnik (University of California, Berkeley, US)
  • Tobias Gerstenberg (Stanford University, US)

Slides

Forthcoming

Talk abstracts

Forthcoming

Organizers

  • Nan Rosemary Ke (Mila, University of Montreal)
  • Anirudh Goyal (Mila, University of Montreal)
  • Jane Wang (DeepMind)
  • Silvia Chiappa (DeepMind)
  • Jovana Mitrovic (DeepMind)
  • Stefan Bauer (Max Planck Institute)
  • Theophane Weber (DeepMind)
  • Danilo Rezende (DeepMind)