Deep Reinforcement Learning for Vision-Based Robotic Grasping

A Simulated Comparative Evaluation of Off-Policy Methods

Deirdre Quillen*, Eric Jang*, Ofir Nachum*, Chelsea Finn, Julian Ibarz, and Sergey Levine

Google Brain Robotics and Berkeley EECS

In this paper, we explore deep reinforcement learning algorithms for vision-based robotic grasping. Model-free deep reinforcement learning (RL) has been successfully applied to a range of challenging environments, but the proliferation of algorithms makes it difficult to discern which particular approach would be best suited for a rich, diverse task like grasping. To answer this question, we propose a simulated benchmark for robotic grasping that emphasizes off-policy learning and generalization to unseen objects. Off-policy learning enables the use of grasping data collected over a wide variety of objects, and this diversity is important for generalization to new objects that were not seen during training.

On this benchmark, we evaluate a variety of Q-function estimation methods, a method previously proposed for robotic grasping with deep neural network models, and a novel approach that combines Monte Carlo return estimation with an off-policy correction. Our results indicate that several simple methods are surprisingly strong competitors to popular algorithms such as double Q-learning, and our analysis of stability sheds light on the relative tradeoffs between the algorithms.
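To make the contrast between the compared target estimators concrete, the sketch below shows two standard building blocks referenced above: a Monte Carlo return computation and a double Q-learning target. This is a minimal, generic NumPy illustration, not the paper's implementation; the function names, the callable Q-network interface, and the specific array shapes are assumptions made for the example.

```python
import numpy as np

def monte_carlo_returns(rewards, gamma=0.9):
    """Discounted Monte Carlo returns G_t = sum_k gamma^k r_{t+k},
    computed by a single backward pass over an episode's rewards."""
    returns = np.zeros_like(rewards, dtype=np.float64)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

def double_q_target(q_online, q_target, next_states, rewards, dones, gamma=0.9):
    """Double Q-learning target (van Hasselt): the online network selects
    the argmax action, while the target network evaluates its value.
    q_online / q_target are assumed to map a batch of states to an
    (N, num_actions) array of Q-values."""
    best_actions = np.argmax(q_online(next_states), axis=1)
    next_values = q_target(next_states)[np.arange(len(next_states)), best_actions]
    # Bootstrapped one-step target; terminal transitions get no bootstrap term.
    return rewards + gamma * (1.0 - dones) * next_values
```

A pure Monte Carlo estimator is unbiased for the behavior policy but high-variance and on-policy; the bootstrapped double-Q target is lower-variance and off-policy but can be biased, which is the tradeoff the comparison in the paper probes.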

This paper will be presented at the International Conference on Robotics and Automation (ICRA), May 2018.


ArXiv paper

Grasping benchmark task

Random policy

Code for learned policies (coming soon)