Sub-policy Adaptation for Hierarchical Reinforcement Learning

Alexander C. Li*, Carlos Florensa*, Ignasi Clavera, Pieter Abbeel

UC Berkeley

* equal contribution

Published at ICLR 2020

[Paper] [Slides] [GitHub Code]

Abstract

Hierarchical Reinforcement Learning is a promising approach to long-horizon decision-making problems with sparse rewards. Unfortunately, most methods still decouple the lower-level skill acquisition process from the training of a higher level that controls the skills in a new task. Treating the skills as fixed can lead to significant sub-optimality in the transfer setting. In this work, we propose a novel algorithm to discover a set of skills, and continuously adapt them along with the higher level even when training on a new task. Our main contributions are two-fold: first, we derive a new hierarchical policy gradient and introduce Hierarchical Proximal Policy Optimization (HiPPO), an on-policy method to efficiently train all levels of the hierarchy simultaneously. Second, we propose a method of training time-abstractions that improves the robustness of the obtained skills to environment changes. Our method outperforms standard PPO when learning from scratch (which can be interpreted as fine-tuning randomly initialized sub-policies), and is able to adapt skills learned through unsupervised pre-training to preference changes in downstream environments.
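
Illustrative Sketch

The released code is linked above; the snippet below is only a minimal, illustrative sketch of the core idea, not the authors' implementation. It assumes a two-level hierarchy and shows a PPO-style clipped surrogate in which the likelihood ratio combines the high-level (skill-selection) and low-level (primitive-action) log-probabilities, so that gradients reach both levels of the hierarchy instead of treating the skills as fixed. All function and variable names are placeholders.

    import torch

    def hippo_style_clipped_loss(new_logp_skill, old_logp_skill,
                                 new_logp_action, old_logp_action,
                                 advantages, clip_eps=0.2):
        # Joint likelihood ratio over (skill, action) pairs: both the
        # higher level and the sub-policies contribute to the gradient.
        ratio = torch.exp((new_logp_skill + new_logp_action)
                          - (old_logp_skill + old_logp_action))
        unclipped = ratio * advantages
        clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
        # PPO-style pessimistic bound, averaged over the batch of timesteps.
        return -torch.min(unclipped, clipped).mean()

    # Toy usage: gradients reach both levels' log-probabilities.
    lp_hi = torch.randn(64, requires_grad=True)  # stand-in for higher-level log-probs
    lp_lo = torch.randn(64, requires_grad=True)  # stand-in for sub-policy log-probs
    loss = hippo_style_clipped_loss(lp_hi, lp_hi.detach(), lp_lo, lp_lo.detach(),
                                    torch.randn(64))
    loss.backward()
    assert lp_hi.grad is not None and lp_lo.grad is not None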

Videos

Presentation

HiPPO - ICLR (public)

Bibtex

@inproceedings{
    Li2020Sub-policy,
    title={Sub-policy Adaptation for Hierarchical Reinforcement Learning},
    author={Alexander Li and Carlos Florensa and Ignasi Clavera and Pieter Abbeel},
    booktitle={International Conference on Learning Representations},
    year={2020},
    url={https://openreview.net/forum?id=ByeWogStDS}
}