Hierarchical RL Workshop

NIPS 2017

Grand Ballroom A

Long Beach Convention Center, CA, USA

Saturday, December 9, 2017

Update: video recordings of the talks are now available here

Reinforcement Learning (RL) has become a powerful tool for tackling complex sequential decision-making problems. RL agents have achieved superhuman performance in game-playing domains such as Go and Atari, and RL can also learn advanced control policies for high-dimensional robotic systems. Nevertheless, current RL agents struggle with sparse rewards, long planning horizons, and, more generally, a scarcity of useful supervision signals. Unfortunately, many of the most valuable control tasks are specified in terms of high-level instructions, which imply sparse rewards when formulated as RL problems. Internal spatio-temporal abstractions and memory structures can constrain the decision space and improve data efficiency in the face of such scarcity, but they are likewise challenging for a supervisor to teach.

Hierarchical Reinforcement Learning (HRL) is emerging as a key component for finding spatio-temporal abstractions and behavioral patterns that can guide the discovery of useful large-scale control architectures, both for deep-network representations and for analytic and optimal-control methods. HRL has the potential to accelerate planning and exploration by identifying skills that can reliably reach desirable future states. It can abstract away the details of low-level controllers to facilitate long-horizon planning and meta-learning in a high-level feature space. Hierarchical structures are modular and amenable to separation of training efforts, reuse, and transfer. By imitating a core principle of human cognition, hierarchies hold promise for interpretability and explainability.

There is growing interest in HRL methods for structure discovery, planning, and learning, as well as in HRL systems for shared learning and policy deployment. The goal of this workshop is to improve cohesion and synergy within the research community and to increase its impact by promoting a better understanding of the challenges and potential of HRL. The workshop further aims to bring together researchers studying both theoretical and practical aspects of HRL, for joint presentation, discussion, and evaluation of some of the many novel approaches to HRL developed in recent years.


The workshop is generously sponsored by Intel and DeepMind.

Invited speakers: Pieter Abbeel (OpenAI/UCB), Matt Botvinick (DeepMind/UCL), Emma Brunskill (Stanford), Jan Peters (TU Darmstadt), Doina Precup (McGill), Jürgen Schmidhuber (IDSIA), David Silver (DeepMind/UCL). See bios here.

Accepted speakers: Nicholas Denis (Ottawa), Anna Harutyunyan (VU Brussel), Xiangyu Kong (Peking), Saurabh Kumar (GeorgiaTech), Shayegan Omidshafiei (MIT), Melrose Roderick (Brown)

The workshop will be held on Saturday, December 9, in Grand Ballroom A.

Program Committee

We thank the program committee for their helpful reviews:

  • Joshua Achiam (UCB/OpenAI)
  • Pierre-Luc Bacon (McGill)
  • Diana Borsa (DeepMind)
  • Carlos Florensa (UCB)
  • Roy Fox (UCB)
  • Francisco Garcia (UMass)
  • David Held (CMU)
  • Bernhard Hengst (UNSW)
  • George Konidaris (Brown)
  • Sanjay Krishnan (UCB)
  • Richard Liaw (UCB)
  • Marlos Machado (UAlberta)
  • Daniel Mankowitz (Technion)
  • Clemens Rosenbaum (UMass)
  • Tom Schaul (DeepMind)

Organizing Committee: Andrew Barto (UMass), Doina Precup (McGill), Shie Mannor (Technion), Tom Schaul (DeepMind), Roy Fox (UCB), Carlos Florensa (UCB). See bios here.

Advisory Committee: Ken Goldberg (UCB), Pieter Abbeel (OpenAI/UCB), Roberto Calandra (UCB). See bios here.

Submission deadline: Friday, November 3, 2017, 23:00 UTC.

See Call for Papers here.