Using and Contributing Data

NOTE: The new website for the Bridge Data can be found here and contains the latest dataset and code. 

Downloading the dataset

The dataset is available for download in full resolution from a Google Cloud bucket (raw images, 100 GB) and from Berkeley servers (mp4 files, 34 GB, a subset of the dataset).

A downsampled version (128x128), in numpy format and compatible with rlkit, is available here.
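As a rough illustration, the downsampled numpy release can be inspected with numpy once downloaded. The filename and any internal structure in the sketch below are assumptions, not the documented layout of the release:

```python
# Illustrative sketch only: the filename and array layout below are
# assumptions, not the documented format of the release.
import numpy as np

# Load the downsampled (128x128) dataset after downloading it locally.
# allow_pickle is needed if the file stores Python objects (e.g. dicts).
data = np.load("bridge_data_128x128.npy", allow_pickle=True)

# Inspect what was loaded before building a training pipeline around it.
print(type(data), getattr(data, "shape", None))
```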

Codebase

Code (Model Training) / Code (Robot Infrastructure)

Contributing Data

Anyone is welcome to contribute data to this growing public dataset. To keep the data consistent and of high quality, we provide instructions below for collecting this kind of robotic data. If you would like to contribute data, please contact Frederik Ebert or Yanlai Yang.

Procurement Instructions

Equipment Purchasing Links

Logitech HD Pro Webcam C920

Oculus Quest 2 VR Headset & Controller

WidowX 250s Robot Arm (6DOF)


Environment Purchasing Links

Extra Kitchen Objects

Toy Sink 1

Toy Sink 2

Toy Sink 3

Toy Kitchen 1

Toy Kitchen 2

Toy Kitchen 3

Toy Kitchen 4

Toy Kitchen 5

Toy Kitchen 6

Data Collection Instructions

For data collection, we use between three and five cameras. One camera is fixed relative to the robot; the others are positioned to the left and right of the fixed camera. For the control experiments, we used only the fixed camera.
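As an illustration of this setup, a data collection config might enumerate the cameras and mark which ones get repositioned between sessions. Everything below (names, device paths) is a hypothetical sketch, not our actual configuration:

```python
# Hypothetical camera configuration mirroring the setup described above.
# Device paths and names are illustrative, not part of our actual code.
CAMERAS = {
    "fixed": {"device": "/dev/video0", "repositioned": False},  # also used for control experiments
    "left":  {"device": "/dev/video2", "repositioned": True},   # left of the fixed camera
    "right": {"device": "/dev/video4", "repositioned": True},   # right of the fixed camera
}
```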

At the beginning of each trajectory, we position the arm roughly 5 to 20 cm from the target object.

The distractor objects in the kitchen are randomized every 5 trajectories; the position of the kitchen relative to the robot, as well as the positions of the non-fixed cameras, is randomized every 25 trajectories.
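A minimal sketch of this randomization schedule, with hypothetical placeholder helpers standing in for the real data-collection code:

```python
# Minimal sketch of the randomization schedule described above.
# All helpers are hypothetical placeholders, not our actual code.

def randomize_distractors():
    """Placeholder: rearrange the distractor objects in the kitchen."""

def randomize_scene():
    """Placeholder: move the kitchen relative to the robot and
    reposition the non-fixed cameras."""

def collect_trajectory(index):
    """Placeholder: start the arm 5-20 cm from the target object,
    then teleoperate and record one trajectory."""

NUM_TRAJECTORIES = 100  # arbitrary example value

for i in range(NUM_TRAJECTORIES):
    if i % 25 == 0:
        randomize_scene()        # every 25 trajectories
    if i % 5 == 0:
        randomize_distractors()  # every 5 trajectories
    collect_trajectory(i)
```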

We will release more instructions about running data collection and control experiments along with our code upon acceptance.