Hierarchical Imitation and Reinforcement Learning

Hoang M. Le, Nan Jiang, Alekh Agarwal, Miro Dudík, Yisong Yue, Hal Daumé III

California Institute of Technology, Microsoft Research - New York, University of Maryland - College Park

Abstract: We study how to effectively leverage expert feedback to learn sequential decision-making policies. We focus on problems with sparse rewards and long time horizons, which typically pose significant challenges in reinforcement learning. We propose an algorithmic framework, called hierarchical guidance, that leverages the hierarchical structure of the underlying problem to integrate different modes of expert interaction. Our framework can incorporate different combinations of imitation learning (IL) and reinforcement learning (RL) at different levels, leading to dramatic reductions in both expert effort and cost of exploration. Using long-horizon benchmarks, including Montezuma’s Revenge, we demonstrate that our approach can learn significantly faster than hierarchical RL, and be significantly more label-efficient than standard IL. We also theoretically analyze labeling cost for certain instantiations of our framework.
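To make the framework concrete, the sketch below illustrates the two-level control structure that hierarchical guidance builds on: a meta-controller proposes subgoals and a low-level policy executes primitive actions toward each subgoal, with either level trainable by IL or RL. This is a minimal toy illustration, not our implementation; the environment and class names (ToyEnv, MetaController, LowLevelPolicy) are assumptions made for the sketch.

```python
# Minimal sketch of two-level hierarchical control, assuming a gym-style
# step()/reset() interface. Class and method names are illustrative only.
import random

class ToyEnv:
    """Placeholder environment: the agent must reach position 10 on a line."""
    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        self.pos += action                      # action is -1 or +1
        done = self.pos >= 10
        reward = 1.0 if done else 0.0           # sparse reward at the goal
        return self.pos, reward, done, {}

class MetaController:
    """High-level policy: maps a state to a subgoal (trainable with IL, e.g. DAgger)."""
    def select_subgoal(self, state):
        return state + 3                        # placeholder: "move 3 steps right"

class LowLevelPolicy:
    """Low-level policy: maps (state, subgoal) to a primitive action (IL or RL, e.g. DDQN)."""
    def select_action(self, state, subgoal):
        return 1 if subgoal > state else -1

def run_episode(env, meta, low, max_subgoals=10, horizon=20):
    """Execute the hierarchy: pick a subgoal, act until it is reached or time runs out."""
    state, total_reward = env.reset(), 0.0
    for _ in range(max_subgoals):
        subgoal = meta.select_subgoal(state)
        for _ in range(horizon):
            state, reward, done, _ = env.step(low.select_action(state, subgoal))
            total_reward += reward
            if done or state == subgoal:
                break
        if done:
            break
    return total_reward

print(run_episode(ToyEnv(), MetaController(), LowLevelPolicy()))
```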

Paper: Link to arXiv version

Code: Implementation for Maze domain and Montezuma's Revenge

Presentation at the International Conference on Machine Learning (ICML 2018)

Hierarchical Imitation Learning

Hierarchically Guided DAgger (hg-DAgger) and Hierarchical Behavior Cloning (h-BC)

Hierarchical Hybrid Imitation-Reinforcement Learning

Hierarchically Guided DAgger / Q-Learning (hg-DAgger/Q)

Maze Navigation Domain

We use multiple randomly generated instances of the environment. The agent (white dot) must navigate to the destination (yellow block) while avoiding all obstacles (red).
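
For illustration only, a toy stand-in for this domain might look like the randomly generated gridworld below, with red obstacle cells and a yellow goal cell. The layout rules and class name (ToyMaze) are assumptions for the sketch and do not reproduce the Maze environment in the released code.

```python
# Toy stand-in for the maze navigation domain: a randomly generated grid with
# obstacle cells (red) and a goal cell (yellow). Names and layout rules are
# assumptions, not the environment definition used in the released code.
import random

class ToyMaze:
    ACTIONS = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}  # up, down, left, right

    def __init__(self, size=8, n_obstacles=10, seed=None):
        rng = random.Random(seed)
        self.size = size
        cells = [(r, c) for r in range(size) for c in range(size)]
        rng.shuffle(cells)
        self.start, self.goal = cells[0], cells[1]          # agent (white) and destination (yellow)
        self.obstacles = set(cells[2:2 + n_obstacles])      # obstacles (red)
        self.reset()

    def reset(self):
        self.agent = self.start
        return self.agent

    def step(self, action):
        dr, dc = self.ACTIONS[action]
        r, c = self.agent[0] + dr, self.agent[1] + dc
        # Stay in place when the move would leave the grid or hit an obstacle.
        if 0 <= r < self.size and 0 <= c < self.size and (r, c) not in self.obstacles:
            self.agent = (r, c)
        done = self.agent == self.goal
        return self.agent, (1.0 if done else 0.0), done, {}

# Example rollout with random actions.
env = ToyMaze(seed=0)
state = env.reset()
for _ in range(5):
    state, reward, done, _ = env.step(random.choice(list(ToyMaze.ACTIONS)))
```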

Example Runs of Hierarchically Guided DAgger (hg-DAgger) on Maze Navigation

Montezuma's Revenge Atari Game - Subgoal Specification for the First Room

Sample Result for Hybrid Imitation-Reinforcement Learning

Here the meta-controller is trained with DAgger, and the low-level controllers are learned with double deep Q-learning (DDQN) with prioritized experience replay.
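
Roughly, the training loop for this hybrid setup interleaves DAgger-style data aggregation at the subgoal level with value-based RL at the primitive-action level. The sketch below substitutes tabular Q-learning for DDQN with prioritized experience replay to keep it short, and the expert and pseudo-reward interfaces (expert_subgoal, subgoal_done, subgoal_reward) are illustrative assumptions rather than our released code.

```python
# Rough sketch of a hybrid IL/RL loop in the spirit of hg-DAgger/Q.
# Tabular Q-learning stands in for DDQN with prioritized experience replay;
# the expert and pseudo-reward interfaces are illustrative assumptions.
from collections import defaultdict
import random

def train(env, subgoals, actions, expert_subgoal, subgoal_done, subgoal_reward,
          episodes=200, max_subgoals=10, horizon=50, alpha=0.1, gamma=0.99, eps=0.1):
    meta_data = {}                          # aggregated state -> expert subgoal labels (DAgger dataset)
    q = defaultdict(float)                  # low-level Q-values over (state, subgoal, action)

    def meta_policy(state):
        # Placeholder "learned" meta-controller: a real implementation fits a
        # classifier to meta_data; here we just look the label up or guess.
        return meta_data.get(state, random.choice(subgoals))

    def low_policy(state, subgoal):
        if random.random() < eps:
            return random.choice(actions)
        return max(actions, key=lambda a: q[(state, subgoal, a)])

    for _ in range(episodes):
        state, done = env.reset(), False
        for _ in range(max_subgoals):
            # High level: roll out the learner's subgoal choice, but record the
            # expert's label for this state (DAgger-style data aggregation).
            subgoal = meta_policy(state)
            meta_data[state] = expert_subgoal(state)

            # Low level: Q-learning on the subgoal-completion pseudo-reward.
            for _ in range(horizon):
                action = low_policy(state, subgoal)
                next_state, _, done, _ = env.step(action)
                reached = subgoal_done(next_state, subgoal)
                target = subgoal_reward(next_state, subgoal)
                if not (done or reached):
                    target += gamma * max(q[(next_state, subgoal, a)] for a in actions)
                q[(state, subgoal, action)] += alpha * (target - q[(state, subgoal, action)])
                state = next_state
                if done or reached:
                    break
            if done:
                break
    return meta_data, q
```

With a discrete domain like the toy maze sketch above, the subgoals could be intermediate cells and expert_subgoal a waypoint oracle along a shortest path; these choices are assumptions for illustration only.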