Residual Reinforcement Learning from Demonstrations

Minttu Alakuijala, Gabriel Dulac-Arnold, Julien Mairal, Jean Ponce and Cordelia Schmid

Google / Inria / Univ. Grenoble Alpes / Ecole Normale Supérieure

Abstract

Residual reinforcement learning (RL) has been proposed as a way to solve challenging robotic tasks by adapting the control actions of a conventional feedback controller so as to maximize a reward signal. We extend the residual formulation to learning from visual inputs and sparse rewards using demonstrations. Learning from images, proprioceptive inputs and a sparse task-completion reward relaxes the requirement that full state features, such as object and target positions, be available. In addition, replacing the base controller with a policy learned from demonstrations removes the dependency on a hand-engineered controller in favour of a dataset of demonstrations, which can be collected by non-experts. Our experimental evaluation on simulated manipulation tasks with a 6-DoF UR5 arm and a 28-DoF dexterous hand shows that residual RL from demonstrations generalizes to unseen environment conditions more flexibly than either behavioral cloning or RL fine-tuning, and solves high-dimensional, sparse-reward tasks that are out of reach for RL from scratch.
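The residual formulation itself is compact enough to illustrate in a few lines. Below is a minimal sketch, not the authors' implementation: the executed action is the sum of a base action from a policy trained by behavioral cloning on demonstrations and a correction from a residual policy trained by RL on a sparse task-completion reward. All names, dimensions, and the placeholder policies (`base_policy`, `residual_policy`, `ACTION_DIM`, `OBS_DIM`) are illustrative assumptions, not part of the paper.

```python
import numpy as np

ACTION_DIM = 6   # e.g. a 6-DoF arm; illustrative assumption
OBS_DIM = 32     # placeholder observation size (images + proprioception in the paper)

rng = np.random.default_rng(0)
# Toy residual weights; in practice these would be trained by an RL algorithm.
residual_params = rng.normal(scale=0.01, size=(ACTION_DIM, OBS_DIM))


def base_policy(obs: np.ndarray) -> np.ndarray:
    """Stand-in for the behavioral-cloning policy trained on demonstrations.

    In the paper's setting this is a network over images and proprioceptive
    inputs; here it simply returns a zero action.
    """
    return np.zeros(ACTION_DIM)


def residual_policy(obs: np.ndarray) -> np.ndarray:
    """Stand-in for the RL-trained residual policy (a linear map here)."""
    return residual_params @ obs


def act(obs: np.ndarray) -> np.ndarray:
    # Residual formulation: the executed action is the base action
    # plus a learned correction.
    return base_policy(obs) + residual_policy(obs)


def sparse_reward(task_completed: bool) -> float:
    """Sparse task-completion reward: 1 on success, 0 otherwise."""
    return 1.0 if task_completed else 0.0


if __name__ == "__main__":
    obs = rng.normal(size=OBS_DIM)
    print(act(obs))  # base action corrected by the residual
```

As in standard residual RL, only the residual is updated during training while the base policy stays fixed, so exploration starts from demonstration-like behavior rather than from random actions.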

[Paper]

Supplementary video

RRLfD.mp4