Model Based Inverse Reinforcement Learning from Visual Demonstrations
Neha Das, Sarah Bechtle, Todor Davchev, Dinesh Jayaraman, Akshara Rai, Franziska Meier
Goal: Learn Manipulation from Visual Human Demos
Visual demo of the task by human
Execution of the task by the robot
The human demonstrator executes the task of placing a bottle on the table. The image sequence from this demonstration is collected and serves as input to our framework.
Using our model-based IRL approach, the KUKA arm learns to place the bottle appropriately from the given visual demo, even when its starting pose differs considerably from the one in the demonstration.
Paper Abstract
Scaling model-based inverse reinforcement learning (IRL) to real robotic manipulation tasks with unknown dynamics remains an open problem. The key challenges lie in learning good dynamics models, developing algorithms that scale to high-dimensional state-spaces and being able to learn from both visual and proprioceptive demonstrations. In this work, we present a gradient-based inverse reinforcement learning framework that utilizes a pre-trained visual dynamics model to learn cost functions when given only visual human demonstrations. The learned cost functions are then used to reproduce the demonstrated behavior via visual model predictive control. We evaluate our framework on hardware on two basic object manipulation tasks.
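To make the bi-level structure described in the abstract concrete, here is a minimal sketch of gradient-based cost learning, not the authors' implementation: an inner loop plans actions against the current cost using a dynamics model (a toy 1-D linear model stands in for the pre-trained visual dynamics model), and an outer loop adjusts the cost parameters so the planned rollout matches the demonstration. The parameterization (a single learnable goal `g`), the finite-difference gradients, and all step sizes are illustrative assumptions.

```python
def rollout(x0, actions):
    # Toy dynamics model x_{t+1} = x_t + u_t, standing in for the
    # pre-trained visual dynamics model used in the paper.
    xs = [x0]
    for u in actions:
        xs.append(xs[-1] + u)
    return xs

def plan(x0, g, horizon, iters=200, lr=0.1):
    # Inner loop (visual MPC analog): optimize an action sequence to
    # minimize the current learned cost sum_t (x_t - g)^2 by
    # finite-difference gradient descent.
    actions = [0.0] * horizon
    eps = 1e-4
    def total_cost(a):
        return sum((x - g) ** 2 for x in rollout(x0, a)[1:])
    for _ in range(iters):
        for t in range(horizon):
            a_hi = list(actions); a_hi[t] += eps
            a_lo = list(actions); a_lo[t] -= eps
            grad = (total_cost(a_hi) - total_cost(a_lo)) / (2 * eps)
            actions[t] -= lr * grad
    return actions

def irl(demo, x0, iters=30, lr=0.1):
    # Outer loop: update the cost parameter g so that planning under the
    # learned cost reproduces the demonstrated trajectory.
    g = 0.0
    eps = 1e-3
    def irl_loss(g_val):
        xs = rollout(x0, plan(x0, g_val, horizon=len(demo) - 1))
        return sum((x - d) ** 2 for x, d in zip(xs, demo))
    for _ in range(iters):
        grad = (irl_loss(g + eps) - irl_loss(g - eps)) / (2 * eps)
        g -= lr * grad
    return g
```

In the paper the demonstration is a sequence of keypoints extracted from images and the dynamics model is learned from visual data; this sketch only mirrors the nested optimization structure.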
Details
Links
Paper Link: https://arxiv.org/abs/2010.09034
Code: Our code is part of the LearningToLearn repository @ FacebookResearch and can be found at https://github.com/facebookresearch/LearningToLearn/tree/main/mbirl