REPLAB: A Reproducible Low-Cost Arm Benchmark Platform For Robotic Learning
Brian Yang, Jesse Zhang, Vitchyr Pong, Sergey Levine, and Dinesh Jayaraman
Standardized evaluation measures have aided in the progress of machine learning approaches in disciplines such as computer vision and machine translation. In this paper, we make the case that robotic learning would also benefit from benchmarking, and present the "REPLAB" platform for benchmarking vision-based manipulation tasks. REPLAB is a reproducible and self-contained hardware stack (robot arm, camera, and workspace) that costs about 2000 USD, occupies a cuboid of size 70x40x60 cm, and permits full assembly within a few hours. Through this low-cost, compact design, REPLAB aims to drive wide participation by lowering the barrier to entry into robotics and to enable easy scaling to many robots. We envision REPLAB as a framework for reproducible research across manipulation tasks, and as a step in this direction, we define a template for a grasping benchmark consisting of a task definition, evaluation protocol, performance measures, and a dataset of 92k grasp attempts. We implement, evaluate, and analyze several previously proposed grasping approaches to establish baselines for this benchmark. Finally, we also implement and evaluate a deep reinforcement learning approach for 3D reaching tasks on our REPLAB platform.
Full Technical Report: https://arxiv.org/abs/1905.07447
A short version of this report was published at ICRA 2019 as: Brian Yang, Dinesh Jayaraman, and Sergey Levine, "REPLAB: A Reproducible Low-Cost Arm Benchmark for Robotic Learning".
To hear news about major updates from us, please sign up at: https://forms.gle/yy7RLa2U6ioxWcJKA
Templates (link) include:
If you do not have access to a laser cutter, we recommend using one of the many online laser-cutting services available.
Instructions for cell assembly can be found on the Assembly page.
Before running the Docker image, the drivers for the Intel RealSense SR300 camera need to be set up. Detailed instructions for setting up the drivers can be found here.
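As a rough sketch (the linked instructions are authoritative), the SR300 is supported by Intel's librealsense2 SDK. On Ubuntu, installing the SDK packages and checking that the camera enumerates on the host before launching Docker typically looks like the following; the package names assume Intel's librealsense apt repository has already been added, which is covered in the linked instructions.
# Assumes Intel's librealsense apt repository has already been added (see the linked instructions)
sudo apt-get update
sudo apt-get install librealsense2-dkms librealsense2-utils
# Confirm the SR300 is detected on the host before starting the container
realsense-viewer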
Pull our Docker image for operating a cell directly with:
docker pull bhyang12/replab
To run the image:
docker run -it --rm --privileged bhyang12/replab
If operating multiple rigs on one machine, you may need to manually specify which ports/devices the Docker container can access. As an example:
docker run -it --rm --device=/dev/video0 --device=/dev/video1 --device=/dev/ttyUSB0 bhyang12/replab
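If you are unsure which /dev/video* and /dev/ttyUSB* entries belong to which cell, one way to map them (assuming the v4l-utils package is installed on the host) is:
# List video devices grouped by the physical camera they belong to
v4l2-ctl --list-devices
# List the USB serial devices used by the arm controllers
ls -l /dev/ttyUSB* /dev/serial/by-id/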
To run with GPU and display port access (requires nvidia-docker 2), use:
docker run --runtime=nvidia -e NVIDIA_DRIVER_CAPABILITIES=compute,utility -e NVIDIA_VISIBLE_DEVICES=all -it --net=host --env="DISPLAY" --volume="$HOME/.Xauthority:/root/.Xauthority:rw" --rm --privileged bhyang12/replab
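Once inside the container, you can sanity-check that the GPU and display were passed through, for example:
# Verify the GPU is visible inside the container
nvidia-smi
# Verify the DISPLAY variable was forwarded
echo $DISPLAY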
Instructions for using our provided scripts can be found in our GitHub repository here.
Shared dataset of grasps (download link) [92k grasps, 73GB zip file, 226GB unzipped]
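Note that extracting the archive requires roughly 300GB of free disk space (the zip plus its extracted contents). A minimal example, where replab_grasps.zip is a hypothetical name for the downloaded archive:
# Check available disk space before unzipping
df -h .
# Extract the dataset (filename is illustrative; use the actual downloaded archive name)
unzip replab_grasps.zip -d replab_grasps/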
Contact replabbenchmark@gmail.com for any questions or concerns.