Randomized-to-Canonical Adaptation Networks

Sim-to-Real via Sim-to-Sim: Data-efficient Robotic Grasping via Randomized-to-Canonical Adaptation Networks

Abstract

Real-world data, especially in the domain of robotics, is notoriously costly to collect. One way to circumvent this is to leverage the power of simulation to produce large amounts of labelled data. However, models trained on simulated images do not readily transfer to real-world ones. Using domain adaptation methods to cross this 'reality gap' requires, at the very least, a large amount of unlabelled real-world data, whilst domain randomization alone can waste modeling power, rendering certain reinforcement learning (RL) methods unable to learn the task of interest. In this paper, we present a novel approach to crossing the visual reality gap that uses no real-world data: we learn to translate randomized rendered images into their equivalent non-randomized, canonical versions. This in turn allows real images to also be translated into canonical sim images. We show that imposing such structure increases the power of randomization while making the task easier for the downstream model. We demonstrate the effectiveness of this sim-to-real approach by training a vision-based closed-loop grasping reinforcement learning agent in simulation and then transferring it to the real world, attaining 66% zero-shot grasp success on unseen objects. Additionally, by fine-tuning in the real world with only 5,000 real-world grasps, our method achieves 86%, comparable to a state-of-the-art system trained with 580,000 real-world grasps, a reduction in real-world data of more than 99%.
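To make the core idea concrete, below is a minimal PyTorch-style sketch of randomized-to-canonical training. It assumes what the abstract implies: in simulation, each scene can be rendered twice, once with randomized appearance and once in a fixed canonical style, yielding paired images for direct supervision. All module and variable names here are hypothetical, and the sketch uses only simple per-pixel losses; the full method may use a different architecture and additional objectives (e.g. adversarial terms), which are not shown.

```python
# Minimal sketch of randomized-to-canonical translation training.
# Hypothetical names and losses; not the paper's exact setup.
import torch
import torch.nn as nn

class CanonicalGenerator(nn.Module):
    """Toy encoder-decoder mapping a randomized sim image to a canonical
    RGB image plus auxiliary segmentation-mask and depth predictions."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            # 5 output channels: 3 canonical RGB + 1 mask + 1 depth
            nn.ConvTranspose2d(32, 5, 4, stride=2, padding=1),
        )

    def forward(self, x):
        out = self.decoder(self.encoder(x))
        rgb = torch.sigmoid(out[:, :3])    # canonical RGB in [0, 1]
        mask = torch.sigmoid(out[:, 3:4])  # object/foreground mask
        depth = out[:, 4:5]                # depth as unbounded regression
        return rgb, mask, depth

gen = CanonicalGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-4)

def training_step(randomized, canonical, true_mask, true_depth):
    """One supervised step on a paired batch: the same simulated scene
    rendered with randomization (input) and canonically (target)."""
    rgb, mask, depth = gen(randomized)
    loss = (nn.functional.l1_loss(rgb, canonical)
            + nn.functional.binary_cross_entropy(mask, true_mask)
            + nn.functional.l1_loss(depth, true_depth))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Because the randomized and canonical renders come from the same simulated scene, no unlabelled real-world data is needed at any point during this training; real images are only seen at test time, when they are fed to the generator as if they were another randomization.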

Real-to-Canonical Example Videos

Left: Real RGB | Centre: Generated RGB | Right: Generated Mask

Note that these videos are from evaluation runs on objects that were never seen during training.

robot1_sim_to_real_.mp4
robot4_sim_to_real_.mp4

The two videos above show the agent's re-grasping ability.

robot3_sim_to_real_.mp4
robot2_sim_to_real_.mp4

In the right video above, note how the generator correctly segments the grey octopus from the tray, despite the two being similar in color.

Also note how shadows are translated into their canonical appearance.

Real-to-Canonical Example Images

The figures show several examples of the input RGB (first row), generated RGB (second row), generated mask (third row), and generated depth (fourth row).
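For completeness, a hypothetical inference snippet matching the row layout above, reusing the CanonicalGenerator sketch from the abstract section: a real RGB frame goes in, and the generated RGB, mask, and depth come out.

```python
# Hypothetical inference matching the figure rows above. At test time a
# real camera frame is simply treated as another "randomized" input.
gen.eval()
with torch.no_grad():
    real_rgb = torch.rand(1, 3, 64, 64)   # stand-in for a real camera frame
    fake_rgb, fake_mask, fake_depth = gen(real_rgb)
print(fake_rgb.shape, fake_mask.shape, fake_depth.shape)
# -> torch.Size([1, 3, 64, 64]) torch.Size([1, 1, 64, 64]) torch.Size([1, 1, 64, 64])
```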