Skill Transfer via Partially Amortized Hierarchical Planning


Kevin Xie*, Homanga Bharadhwaj*, Danijar Hafner, Animesh Garg, Florian Shkurti

International Conference on Learning Representations (ICLR) 2021

*Kevin and Homanga contributed equally.

To quickly solve new tasks in complex environments, intelligent agents need to build up reusable knowledge. For example, a learned world model captures knowledge about the environment that applies to new tasks. Similarly, skills capture general behaviors that can apply to new tasks. In this paper, we investigate how these two approaches can be integrated into a single reinforcement learning agent. Specifically, we leverage the idea of partial amortization for fast adaptation at test time. For this, actions are produced by a policy that is learned over time, while the skills it conditions on are chosen using online planning. We demonstrate the benefits of our design decisions across a suite of challenging locomotion tasks, showing improved sample efficiency in single tasks as well as in transfer from one task to another, as compared to competitive baselines.
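As a rough sketch of what partial amortization means in practice, the snippet below pairs an amortized, skill-conditioned low-level policy with a simple cross-entropy-method search over skill latents in a learned world model. All names, interfaces, and hyperparameters here are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of partial amortization (hypothetical names, not the authors' code):
# the low-level policy pi(a | s, z) is amortized (learned once), while the skill
# latent z is chosen at test time by online planning (here, cross-entropy method).
import numpy as np

SKILL_DIM = 8          # dimensionality of the skill latent z (assumed)
PLAN_HORIZON = 20      # imagined rollout length used to score a skill (assumed)
NUM_CANDIDATES = 64    # skill samples per CEM iteration
NUM_ELITES = 8
CEM_ITERS = 5

def plan_skill(state, policy, world_model, reward_fn):
    """Search over skill latents with CEM; per-step actions stay amortized in the policy."""
    mean, std = np.zeros(SKILL_DIM), np.ones(SKILL_DIM)
    for _ in range(CEM_ITERS):
        skills = mean + std * np.random.randn(NUM_CANDIDATES, SKILL_DIM)
        returns = np.array([
            imagine_return(state, z, policy, world_model, reward_fn) for z in skills
        ])
        elites = skills[np.argsort(returns)[-NUM_ELITES:]]
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean  # skill latent to condition the low-level policy on

def imagine_return(state, skill, policy, world_model, reward_fn):
    """Roll the learned world model forward under one fixed skill and sum predicted rewards."""
    total = 0.0
    for _ in range(PLAN_HORIZON):
        action = policy(state, skill)        # amortized: a single forward pass per step
        state = world_model(state, action)   # latent dynamics prediction
        total += reward_fn(state)
    return total
```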


Skill Visualization Videos

We select arbitrary skills from the skill space and hold each one fixed for 100 environment steps. The videos below show that the policy learns meaningful and distinct behaviours conditioned on the skill. Note that the agents are fully deterministic, so the variations are due only to the skill conditioning.
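For reference, the sketch below mirrors this visualization procedure: sample one skill vector, rescale it to lie inside or outside the unit ball (matching the two video groups that follow), and hold it fixed while the deterministic policy acts for 100 steps. The gym-style environment and policy interfaces are assumptions for illustration only.

```python
# Sketch of a single visualization rollout (gym-style interfaces assumed, not the authors' code).
import numpy as np

def visualize_skill(env, policy, skill_dim=8, steps=100, inside_unit_ball=True):
    """Roll out the deterministic skill-conditioned policy under one fixed skill."""
    skill = np.random.randn(skill_dim)
    # Rescale so the skill lies inside (norm 0.5) or outside (norm 2.0) the unit ball.
    skill = skill / np.linalg.norm(skill) * (0.5 if inside_unit_ball else 2.0)
    obs = env.reset()
    frames = []
    for _ in range(steps):
        action = policy(obs, skill)          # deterministic action, conditioned on the fixed skill
        obs, _, done, _ = env.step(action)   # gym-style step: (obs, reward, done, info)
        frames.append(env.render(mode="rgb_array"))
        if done:
            break
    return frames
```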

Quadruped Walk: Skills Within the Unit Ball


Quadruped Walk: Skills Outside the Unit Ball

Transfer to Quadruped Reach

We transfer the learned low-level skills to a target-reaching task. The red spot represents the target, and its location is randomly sampled within the arena.

Pre-transfer Task: Random Obstacle Walk

The agent is pretrained to walk at a constant speed around randomly sampled obstacle configurations in the arena.

Transfer Task: Cove Obstacle Target Reach

Dreamer after 80 episodes

LSP after 80 episodes

Dreamer transfers the world model, policy, and value function. LSP transfers the world model, sub-skill policy, and value function.
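The sketch below illustrates this difference in which components are reused when starting the transfer task. The module names and the parameter-copying calls are assumptions for illustration, not the released code.

```python
# Hypothetical sketch of the transfer step (module names are assumptions, not the released code).
# Both agents warm-start the new task with pretrained components; the difference is which
# policy is carried over: Dreamer's flat policy vs. LSP's skill-conditioned sub-skill policy.
def transfer(pretrained, fresh_agent, method="lsp"):
    reused = ["world_model", "value_function"]
    reused.append("skill_policy" if method == "lsp" else "policy")
    for name in reused:
        # Copy pretrained parameters into the new agent (PyTorch-style, assumed interface).
        fresh_agent.modules[name].load_state_dict(pretrained.modules[name].state_dict())
    return fresh_agent
```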