RLBench

Rearrangement: A Challenge for Embodied AI

The RLBench Rearrangement Challenge

A fixed-base robotic manipulator (a Franka Panda arm with a Franka gripper) is tasked with picking up a randomly scattered set of grocery objects on a table and placing them into a constrained shelf space. The target location for all of the objects is defined as the same volume above the shelf, rather than a specific goal for each object. This leads to interesting emergent difficulty: objects become increasingly difficult to place as the shelf fills, and the best solutions require planning the order and placement in which all of the objects are moved. The sensory suite includes colour cameras and depth sensors mounted on the wrist, on the hand, and over the shoulder, as well as proprioceptive sensors including joint encoders and joint force sensing. Below is a video of the task being performed.
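The order-planning aspect described above can be illustrated with a toy sketch: when every object shares one target volume, a simple heuristic is to place the largest objects first so the remaining space stays usable. The object names, sizes, and the `plan_placement_order` helper below are all hypothetical, purely to make the idea concrete; the real task of course involves geometry, not just volume.

```python
def plan_placement_order(objects, shelf_volume):
    """Greedy largest-first heuristic (illustrative only): returns the
    objects that fit under a crude total-volume bound, in placement order."""
    order = sorted(objects, key=lambda o: o["volume"], reverse=True)
    placed, used = [], 0.0
    for obj in order:
        if used + obj["volume"] <= shelf_volume:
            placed.append(obj["name"])
            used += obj["volume"]
    return placed, used

# Hypothetical grocery items with made-up volumes.
groceries = [
    {"name": "cereal_box", "volume": 3.0},
    {"name": "soup_can", "volume": 0.5},
    {"name": "crackers", "volume": 1.5},
]
placed, used = plan_placement_order(groceries, shelf_volume=4.0)
print(placed)  # -> ['cereal_box', 'soup_can']
```

Placing greedily by size is only a baseline; part of what makes the challenge interesting is that good policies must discover such ordering strategies from sensing and interaction rather than from known object volumes.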

Click here for code which implements this rearrangement task in RLBench.

Click here for the RLBench GitHub page, which includes install and run instructions.
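Once RLBench and its CoppeliaSim dependency are installed, a typical session looks roughly like the sketch below. It assumes RLBench's documented Python API; the task class name `PutGroceriesInCupboard` is an assumption based on the task's file name and may differ in the codebase. Running it requires a working CoppeliaSim installation.

```python
import numpy as np

from rlbench.action_modes.action_mode import MoveArmThenGripper
from rlbench.action_modes.arm_action_modes import JointVelocity
from rlbench.action_modes.gripper_action_modes import Discrete
from rlbench.environment import Environment
from rlbench.observation_config import ObservationConfig
from rlbench.tasks import PutGroceriesInCupboard  # assumed class name

# Enable all camera and proprioceptive observations.
obs_config = ObservationConfig()
obs_config.set_all(True)

# Arm commanded in joint velocities; gripper with a discrete open/close bit.
env = Environment(
    action_mode=MoveArmThenGripper(
        arm_action_mode=JointVelocity(),
        gripper_action_mode=Discrete()),
    obs_config=obs_config,
    headless=True)
env.launch()

task = env.get_task(PutGroceriesInCupboard)
descriptions, obs = task.reset()  # randomises object placement on the table

# Step the task with random actions (a real agent would go here).
for _ in range(10):
    action = np.random.uniform(-1.0, 1.0, size=env.action_shape)
    obs, reward, terminate = task.step(action)

env.shutdown()
```

The observation object returned at each step carries the wrist, hand, and over-the-shoulder RGB and depth images alongside joint states, matching the sensory suite described above.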

put_all_groceries_in_cupboard.avi

What is RLBench?

RLBench is an ambitious large-scale benchmark and learning environment featuring 100 unique, hand-designed tasks, tailored to facilitate research in a number of vision-guided manipulation areas, including reinforcement learning, imitation learning, multi-task learning, geometric computer vision and, in particular, few-shot learning.

Paper: https://arxiv.org/abs/1909.12271

Code: https://github.com/stepjam/RLBench