Accepted at ICLR 2025
Developing agents for complex and underspecified tasks, where no clear objective exists, remains challenging but offers many opportunities. This is especially true in video games, where simulated players (bots) need to play realistically and there is no clear reward to evaluate them. While imitation learning has shown promise in such domains, these methods often fail when agents encounter out-of-distribution scenarios during deployment. Expanding the training dataset is a common solution, but it becomes impractical or costly when relying on human demonstrations. This article addresses active imitation learning, which aims to trigger expert intervention only when necessary, reducing the need for constant expert input throughout training. We introduce Random Network Distillation DAgger (RND-DAgger), a new active imitation learning method that limits expert querying by using a learned state-based out-of-distribution measure to trigger interventions. This approach avoids frequent expert-agent action comparisons, ensuring that the expert intervenes only when it is useful. We evaluate RND-DAgger against traditional imitation learning and other active approaches in 3D video games (racing and third-person navigation) and in a robotic locomotion task, and show that RND-DAgger surpasses previous methods by reducing expert queries.
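As a rough illustration of the core mechanism, the PyTorch sketch below shows how a Random Network Distillation score can act as a learned state-based OOD measure: a predictor network is trained to match a fixed, randomly initialized target network on in-distribution states, and its prediction error on new states serves as the novelty signal. The class name `RNDMeasure`, the layer sizes, and `embed_dim` are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class RNDMeasure(nn.Module):
    """Hypothetical RND-based OOD measure: the prediction error of a trained
    predictor against a fixed random target network scores state novelty."""

    def __init__(self, state_dim: int, embed_dim: int = 64):
        super().__init__()
        # Fixed, randomly initialized target network (never trained).
        self.target = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(), nn.Linear(128, embed_dim)
        )
        for p in self.target.parameters():
            p.requires_grad_(False)
        # Predictor network, trained to match the target on in-distribution states.
        self.predictor = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(), nn.Linear(128, embed_dim)
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # Per-state OOD score: mean squared error between the two embeddings.
        # High error means the state is unlike those the predictor was trained on.
        return ((self.predictor(state) - self.target(state)) ** 2).mean(dim=-1)
```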
(a) The learner’s policy controls the agent until (b) our Random Network Distillation-based out-of-distribution (OOD) measure is triggered. Then (c) the expert takes control until the OOD measure stays below the threshold for at least W steps. (d) Finally, the current policy regains control of the agent to continue the episode and can trigger the expert again later.
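The following minimal sketch makes the switching protocol of steps (a)-(d) concrete. The `env`, `policy`, and `expert` objects and their `act`/`step` signatures are assumptions for illustration; it reuses the hypothetical `RNDMeasure` sketched above and omits demonstration collection and predictor updates for brevity.

```python
import torch

def run_episode(env, policy, expert, rnd_measure, threshold: float, W: int):
    """Hypothetical control loop for steps (a)-(d): the learner acts until the
    RND OOD score exceeds `threshold`; the expert then keeps control until the
    score has stayed below the threshold for at least `W` consecutive steps."""
    state, done = env.reset(), False
    expert_in_control, steps_below = False, 0
    while not done:
        score = rnd_measure(torch.as_tensor(state, dtype=torch.float32)).item()
        if not expert_in_control and score > threshold:
            expert_in_control, steps_below = True, 0   # (b) trigger: expert takes over
        elif expert_in_control:
            steps_below = steps_below + 1 if score <= threshold else 0
            if steps_below >= W:
                expert_in_control = False              # (d) learner regains control
        controller = expert if expert_in_control else policy
        state, done = env.step(controller.act(state))  # assumed interfaces
```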
Environments were accelerated when the learned policy was in control, significantly reducing the time required from human experts. This acceleration (or even parallelization) is feasible within the autonomous switching framework of RND-DAgger. HG-DAgger and other DAgger baselines require frequent context switches, making it difficult or impossible to benefit from similar accelerations.
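A minimal sketch of how such acceleration could be wired in, assuming the engine exposes a time-scale control (as Unity-style engines do); the `set_time_scale` API and the speed-up factor are assumptions, not the paper's implementation:

```python
# Hypothetical helper: run the simulation faster only while the learner is in
# control, so the human expert always interacts in real time after taking over.
def set_simulation_speed(env, expert_in_control: bool, speedup: float = 4.0) -> None:
    env.set_time_scale(1.0 if expert_in_control else speedup)  # assumed engine API
```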
Note that when RND-DAgger's out-of-distribution measure is above the set threshold, the gizmo ball turns red (and green otherwise), and when the expert has control, a red transparent capsule appears around the bot.
Here, each "break" corresponds to a context switch: the measure detected an OOD state, and the oracle was requested to take control.