Automatic Curriculum Learning through Value Disagreement

Yunzhi Zhang, Pieter Abbeel, Lerrel Pinto

Abstract

Solving tasks that an agent has never solved before lies at the heart of behavior learning. Through reinforcement learning (RL), we have made massive strides towards solving tasks that have a single goal. However, in the multi-task domain, where an agent needs to reach multiple goals, current methods leave us wanting. One of the key reasons for poor performance on such goal-conditioned problems is that the order of presented goals is effectively random. When biological agents learn, learning often happens in an organized and meaningful order. Inspired by this, we propose setting up an automatic curriculum over the goals our agent needs to solve. Our key insight is that sampling goals at the frontier of an agent's learning provides a significantly stronger learning signal than sampling goals at random. To operationalize this idea, we introduce a goal proposal module that prioritizes goals which maximize the epistemic uncertainty of a learned Q-value function. This simple technique samples goals that are neither too hard nor too easy for the agent to solve, thereby enabling it to continually improve. We demonstrate that our method achieves significant performance gains over current state-of-the-art methods across 13 multi-goal robotic tasks and 5 navigation tasks.
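As a concrete illustration of the goal proposal idea, the following minimal sketch scores candidate goals by the disagreement of an ensemble of goal-conditioned Q-functions. It is not the paper's reference implementation; the `q_ensemble` interface, the function name, and the use of the ensemble standard deviation as the uncertainty measure are assumptions made here for illustration.

```python
import numpy as np

def value_disagreement(q_ensemble, state, goals):
    """Epistemic-uncertainty score for each candidate goal.

    q_ensemble: list of goal-conditioned Q-functions, each mapping
                (state, goal) -> estimated return (assumed interface).
    state:      initial state from which the goals would be attempted.
    goals:      sequence of candidate goals.
    """
    # Shape (ensemble_size, num_goals): each member's value estimate per goal.
    q_values = np.stack([
        np.array([q(state, g) for g in goals]) for q in q_ensemble
    ])
    # Disagreement = standard deviation across ensemble members. Goals the
    # ensemble agrees on (too easy or too hard) get low scores; goals at the
    # learning frontier get high scores.
    return q_values.std(axis=0)
```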

Does value disagreement capture the learning frontier?

We illustrate the goal-conditioned policy returns, the Q-values averaged over the ensemble, and the associated goal-sampling distribution for three maze environments with sparse rewards. In the third column, goals in the darker regions have a higher probability of being sampled. The ground-truth learning frontier is the boundary between the light- and dark-red regions in the first two columns. Empirically, goals lying on this frontier are more likely to be sampled and thus provide a strong learning signal for policy improvement.
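One way to turn per-goal disagreement scores into a sampling distribution like the one visualized in the third column is to normalize the scores into probabilities and sample proportionally. The snippet below is a sketch under that assumption; the proportional normalization, the epsilon floor, and the variable names are choices made here rather than details taken from the paper.

```python
import numpy as np

def goal_sampling_probs(scores, eps=1e-8):
    """Convert per-goal disagreement scores into sampling probabilities:
    higher disagreement (darker regions in the figure) -> sampled more often."""
    probs = scores + eps      # avoid an all-zero distribution early in training
    return probs / probs.sum()

# Usage sketch: pick the next training goal in proportion to its score.
# scores = value_disagreement(q_ensemble, initial_state, candidate_goals)
# idx = np.random.choice(len(candidate_goals), p=goal_sampling_probs(scores))
# next_goal = candidate_goals[idx]
```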