3D Shape Estimation of Transparent Objects for Manipulation

Transparent objects are a common part of everyday life, yet their unique visual properties make it very difficult for standard 3D sensors to produce accurate depth estimates for them. They often appear as noisy or distorted approximations of the surfaces that lie behind them. To address these challenges, we present ClearGrasp -- a deep learning approach for estimating accurate 3D geometry of transparent objects from a single RGB-D image for robotic manipulation. Given a single RGB-D image of transparent objects, ClearGrasp uses deep convolutional networks to infer surface normals, masks of transparent surfaces, and occlusion boundaries. It then uses these outputs to refine the initial depth estimates for all transparent surfaces in the scene. To train and test ClearGrasp, we construct a large-scale synthetic dataset of over 50,000 RGB-D images, as well as a real-world test benchmark with 286 RGB-D images of transparent objects and their ground truth geometries. The experiments demonstrate that ClearGrasp is substantially better than monocular depth estimation baselines and is capable of generalizing to real-world images and novel objects. We also demonstrate that ClearGrasp can be applied out-of-the-box to improve grasping algorithms' performance on transparent objects.
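The pipeline above predicts surface normals, transparency masks, and occlusion boundaries, and then refines the raw sensor depth inside the transparent regions. As a rough illustration of the refinement stage only, the sketch below fills in masked depth with an iterative smoothness solve that keeps observed (non-transparent) depth fixed. This is a simplified stand-in: ClearGrasp's actual refinement is a global optimization additionally guided by the predicted normals and occlusion boundaries, and the function name here is hypothetical.

```python
import numpy as np

def refine_depth(depth, transparent_mask, iters=500):
    """Fill depth inside `transparent_mask` by repeatedly averaging each
    masked pixel with its 4 neighbors (a Laplace/smoothness solve), while
    keeping observed depth outside the mask fixed.

    Simplified sketch only: the real ClearGrasp optimization also uses
    predicted surface normals and occlusion boundaries as constraints.
    """
    d = depth.astype(np.float64).copy()
    # Initialize corrupted region with the mean of the valid observations.
    d[transparent_mask] = d[~transparent_mask].mean()
    for _ in range(iters):
        # 4-neighbor average (np.roll wraps at borders; fine for an
        # interior mask in this toy example).
        avg = 0.25 * (np.roll(d, 1, axis=0) + np.roll(d, -1, axis=0) +
                      np.roll(d, 1, axis=1) + np.roll(d, -1, axis=1))
        # Update only the transparent pixels; observed depth stays fixed.
        d[transparent_mask] = avg[transparent_mask]
    return d

# Toy example: a flat 1 m surface whose center reads as invalid (0) depth,
# as a transparent object might appear to a depth sensor.
depth = np.ones((10, 10))
mask = np.zeros((10, 10), dtype=bool)
mask[3:7, 3:7] = True
depth[mask] = 0.0
refined = refine_depth(depth, mask)
```

After refinement, the masked region converges toward the surrounding surface, which is the behavior the learned cues are meant to guide in the full system.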


  • 24th Feb 2020 - New data available: 3D pose of transparent objects, their 3D CAD models and instance masks
  • 12th Feb 2020 - ClearGrasp featured on Google AI Blog: Learning to See Transparent Objects
  • 22nd Jan 2020 - Paper accepted at IEEE International Conference on Robotics and Automation (ICRA), 2020

Results and Comparison

Synthetic Dataset


Surface Normals

Occlusion Boundaries

Segmentation Masks

Real World Dataset

Transparent Frame

Opaque Frame

Transparent Depth

Opaque Depth