One-Shot Visual Imitation Learning via Meta-Learning

Chelsea Finn*, Tianhe Yu*, Tianhao Zhang, Pieter Abbeel, and Sergey Levine

Conference on Robot Learning, 2017

Abstract: In order for a robot to be a generalist that can perform a wide range of jobs, it must be able to acquire a wide variety of skills quickly and efficiently in complex unstructured environments. High-capacity models such as deep neural networks can enable a robot to represent complex skills, but learning each skill from scratch then becomes infeasible. In this work, we present a meta-imitation learning method that enables a robot to learn how to learn more efficiently, allowing it to acquire new skills from just a single demonstration. Unlike prior methods for one-shot imitation, our method can scale to raw pixel inputs and requires data from significantly fewer prior tasks for effective learning of new skills. Our experiments on both simulated and real robot platforms demonstrate the ability to learn new tasks, end-to-end, from a single visual demonstration.

Paper: https://arxiv.org/pdf/1709.04905.pdf

Code: https://github.com/tianheyu927/mil
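
To make the idea in the abstract concrete, below is a minimal, illustrative sketch of gradient-based meta-imitation learning: an inner adaptation step of behavioral cloning on a single demonstration, and an outer objective that evaluates the adapted policy on a held-out demonstration from the same task. This is not the authors' implementation (the released code linked above is in TensorFlow); the PyTorch framing, the toy MLP policy, the data shapes, and the single inner gradient step are assumptions chosen for brevity.

```python
# Illustrative MAML-style meta-imitation sketch (not the authors' code).
# Assumes low-dimensional observations and a mean-squared behavioral-cloning loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Policy(nn.Module):
    """Toy MLP policy mapping observations to continuous actions (hypothetical sizes)."""
    def __init__(self, obs_dim=16, act_dim=4, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs):
        return self.net(obs)


def bc_loss(policy, params, obs, actions):
    """Behavioral-cloning loss, evaluated with an explicit parameter dictionary."""
    pred = torch.func.functional_call(policy, params, (obs,))
    return F.mse_loss(pred, actions)


def meta_train_step(policy, meta_opt, task_batch, inner_lr=0.01):
    """One outer update: adapt to each task's single demo, score on a held-out demo."""
    meta_opt.zero_grad()
    params = dict(policy.named_parameters())
    outer_losses = []
    for (obs_d, act_d), (obs_t, act_t) in task_batch:
        # Inner step: one gradient step of behavioral cloning on the single demonstration.
        inner = bc_loss(policy, params, obs_d, act_d)
        grads = torch.autograd.grad(inner, tuple(params.values()), create_graph=True)
        adapted = {name: p - inner_lr * g
                   for (name, p), g in zip(params.items(), grads)}
        # Outer objective: post-adaptation loss on a second demo from the same task.
        outer_losses.append(bc_loss(policy, adapted, obs_t, act_t))
    meta_loss = torch.stack(outer_losses).mean()
    meta_loss.backward()
    meta_opt.step()
    return meta_loss.item()


if __name__ == "__main__":
    torch.manual_seed(0)
    policy = Policy()
    meta_opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

    # Synthetic "tasks" stand in for real demonstrations: each task pairs an
    # adaptation demo with a held-out demo of the same (random) data shapes.
    def fake_demo():
        return torch.randn(32, 16), torch.randn(32, 4)

    task_batch = [(fake_demo(), fake_demo()) for _ in range(4)]
    print("meta-loss:", meta_train_step(policy, meta_opt, task_batch))
```

At test time, the same inner step is applied once to the meta-trained parameters using the single demonstration of the new task, and the adapted policy is then executed; scaling this sketch to raw pixels would require swapping the MLP for the paper's convolutional architecture.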

Supplementary Summary Video with Results

one_shot_imitation_short.mp4

Longer Explanatory Video with Results

one_shot_imitation_full.mp4