Meta Automatic Curriculum Learning


Abstract

A major challenge in the Deep RL (DRL) community is to train agents that are able to generalize their control policies to situations never seen during training. Training on diverse tasks has been identified as a key ingredient for good generalization, which has pushed researchers towards rich procedural task generation systems controlled through complex continuous parameter spaces. In such complex task spaces, it is essential to rely on some form of Automatic Curriculum Learning (ACL) to adapt the task sampling distribution to a given learning agent, instead of sampling tasks at random, as many could end up being either trivial or unfeasible. Since prior knowledge on such task spaces is hard to obtain, many ACL algorithms explore the task space to detect progress niches over time. This costly tabula-rasa search process needs to be repeated for each new learning agent, even though different agents may have similar capability profiles. To address this limitation, we introduce the concept of Meta-ACL, and formalize it in the context of black-box RL learners, i.e. algorithms seeking to generalize curriculum generation to an (unknown) distribution of learners. In this work, we present AGAIN, a first instantiation of Meta-ACL, and showcase its benefits for curriculum generation over classical ACL in multiple simulated environments, including procedurally generated parkour environments with learners of varying morphologies.
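For intuition, the loop below is a minimal sketch of the teacher-student interaction that ACL methods implement: a teacher proposes task parameters, the student trains on the corresponding episode, and the teacher uses the resulting return to adapt its task sampling distribution. The ToyACLTeacher class, its progress heuristic, and the run_episode interface are hypothetical simplifications for illustration, not the actual ALP-GMM or AGAIN implementations.

import numpy as np

class ToyACLTeacher:
    """Toy teacher keeping a Gaussian task-sampling distribution that drifts
    toward tasks of intermediate difficulty (hypothetical heuristic)."""
    def __init__(self, task_dim, low=0.0, high=1.0):
        self.low, self.high = low, high
        self.mean = np.full(task_dim, 0.5 * (low + high))
        self.std = np.full(task_dim, 0.25 * (high - low))

    def sample_task(self):
        # Propose task parameters from the current curriculum distribution.
        return np.clip(np.random.normal(self.mean, self.std), self.low, self.high)

    def update(self, task, episodic_return):
        # Crude stand-in for a learning-progress signal: nudge the sampling
        # distribution toward tasks that are neither trivial nor unfeasible.
        if 0.2 < episodic_return < 0.8:
            self.mean += 0.05 * (task - self.mean)

def train_with_curriculum(student, teacher, n_episodes=1000):
    for _ in range(n_episodes):
        task_params = teacher.sample_task()                 # teacher proposes a task
        episodic_return = student.run_episode(task_params)  # student trains on it
        teacher.update(task_params, episodic_return)        # teacher adapts its curriculum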

Visualizations for experiments on the Walker-Climber environment


We test our approach on a new parametric Box2D environment featuring both a continuous task space and a multi-modal student distribution. By leveraging knowledge from previously trained students, we show that our Meta-ACL algorithm AGAIN is better able to train a set of 64 randomly drawn new DRL students than regular ACL (ALP-GMM) or a random curriculum (Random).


In the following plots, each circle represents a test task (225 tasks uniformly distributed over the task space), and its color indicates how many of the 64 students managed to master that task after training. AGAIN better scaffolds its students, leading them to master more tasks than the other teacher conditions.
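As an illustration of how such per-task colors can be computed, the snippet below is a minimal sketch assuming a hypothetical boolean array mastered of shape (number of students, number of test tasks), where entry (i, j) is True if student i mastered test task j after training; each circle is then colored by the corresponding per-task count.

import numpy as np

n_students, n_test_tasks = 64, 225
mastered = np.zeros((n_students, n_test_tasks), dtype=bool)  # placeholder for evaluation results
mastery_counts = mastered.sum(axis=0)  # one value in [0, 64] per test task, used as circle color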

Our proposed parametric Walker-Climber environment. At the beginning of training, a DRL agent is randomly embodied in either a climber or a walker morphology with randomized limb sizes.

Examples of locomotion policies learned by DRL students trained with AGAIN

4012_91.mp4
4023_63_walker.mp4
4001_67.mp4
4012_91_walker.mp4