Multi-modal transfer learning for grasping transparent and specular objects

Thomas Weng, Amith Pallankize, Yimin Tang, Oliver Kroemer, David Held

RA-L publication with ICRA 2020 presentation

[Paper]

Abstract

State-of-the-art object grasping methods rely on depth sensing to plan robust grasps, but commercially available depth sensors fail to detect transparent and specular objects. To improve grasping performance on such objects, we introduce a method for learning a multi-modal perception model by bootstrapping from an existing uni-modal model. This transfer learning approach requires only a pre-existing uni-modal grasping model and paired multi-modal image data for training, forgoing the need for ground-truth grasp success labels or real grasp attempts. Our experiments demonstrate that our approach reliably grasps transparent and reflective objects.
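
The sketch below illustrates the bootstrapping idea described in the abstract, assuming PyTorch: a frozen depth-only grasping model scores grasps on the depth channel of paired RGB-D images, and those scores serve as pseudo-labels for training a multi-modal model. The names (DepthGraspNet-style depth_model, rgbd_model, paired_loader) and hyperparameters are hypothetical placeholders for illustration, not the authors' released code.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader

    def train_multimodal_from_unimodal(depth_model: nn.Module,
                                       rgbd_model: nn.Module,
                                       paired_loader: DataLoader,
                                       epochs: int = 10,
                                       lr: float = 1e-4) -> nn.Module:
        """Bootstrap an RGB-D grasping model from a depth-only model.

        The pretrained depth-only "teacher" scores grasps on the depth
        channel of each paired image; those scores act as pseudo-labels
        for the multi-modal "student", so no ground-truth grasp success
        labels or real grasp attempts are required.
        """
        depth_model.eval()                       # frozen uni-modal teacher
        optimizer = torch.optim.Adam(rgbd_model.parameters(), lr=lr)
        criterion = nn.MSELoss()                 # regress teacher grasp scores

        for _ in range(epochs):
            for rgb, depth in paired_loader:     # paired RGB and depth images
                with torch.no_grad():
                    target = depth_model(depth)  # teacher's grasp quality scores
                # student sees both modalities, concatenated on the channel axis
                pred = rgbd_model(torch.cat([rgb, depth], dim=1))
                loss = criterion(pred, target)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
        return rgbd_model

Because the student also receives the RGB channels, it can learn to score grasps even where the depth channel is missing or corrupted, which is what happens for transparent and specular surfaces at test time.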

Selected Press Coverage:

Grasping Opaque Clutter

Grasping Transparent Clutter

Grasping Specular Clutter

ICRA 2020 Presentation Video

RA-L + ICRA Submission Video

Acknowledgements

This work was supported by the National Science Foundation Smart and Autonomous Systems Program (IIS-1849154), the Sony Corporation, the Office of Naval Research (N00014-18-1-2775), the NSF Graduate Research Fellowship Program (DGE-1745016), the Efort Intelligent Equipment Company, and ShanghaiTech University. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the ONR or NSF.