Synergies Between Affordance and Geometry: 6-DoF Grasp Detection via Implicit Representations
[Paper] [Code]

Zhenyu Jiang, Yifeng Zhu, Maxwell Svetlik, Kuan Fang, Yuke Zhu

Abstract

Grasp detection in clutter requires the robot to reason about the 3D scene from incomplete and noisy perception. In this work, we draw on the insight that 3D reconstruction and grasp learning are two intimately connected tasks, both of which require a fine-grained understanding of local geometric details. We thus propose to utilize the synergies between grasp affordance and 3D reconstruction through multi-task learning of a shared representation. Our model takes advantage of deep implicit functions, a continuous and memory-efficient representation, to enable differentiable training of both tasks. We train the model on self-supervised grasp trials collected in simulation and evaluate it on a clutter removal task, where the robot clears cluttered objects by grasping them one at a time. Experimental results in simulation and on a real robot demonstrate that implicit neural representations and joint learning of grasp affordance and 3D reconstruction lead to state-of-the-art grasping performance, with our method outperforming baselines by over 10% in grasp success rate.

Synergies between Affordance and Geometry

We harness the synergies between affordance and geometry for 6-DoF grasp detection in clutter. Our model jointly learns grasp affordance prediction and 3D reconstruction. Supervision from reconstruction helps the model learn geometry-aware features that support accurate grasps in occluded regions from partial observations. Supervision from grasping, in turn, improves 3D reconstruction in graspable regions.
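To make the multi-task setup concrete, below is a minimal sketch of a joint training objective in PyTorch. It is an illustration under our own assumptions: the individual loss terms, the dictionary keys, and the weighting are placeholders, not the paper's released training code.

```python
import torch
import torch.nn as nn


def quaternion_loss(pred_q, target_q):
    """Illustrative rotation loss: 1 - |cosine| between unit quaternions
    (the absolute value treats q and -q as the same rotation)."""
    pred_q = torch.nn.functional.normalize(pred_q, dim=-1)
    return (1.0 - (pred_q * target_q).sum(dim=-1).abs()).mean()


class JointAffordanceGeometryLoss(nn.Module):
    """Illustrative multi-task objective combining grasp affordance and
    3D reconstruction supervision; terms and weights are assumptions."""

    def __init__(self, recon_weight: float = 1.0):
        super().__init__()
        self.quality_loss = nn.BCEWithLogitsLoss()    # grasp success vs. failure
        self.width_loss = nn.MSELoss()                # gripper opening width
        self.occupancy_loss = nn.BCEWithLogitsLoss()  # occupied vs. free space
        self.recon_weight = recon_weight

    def forward(self, pred, target):
        # `pred` and `target` are dicts of tensors produced by the shared network.
        grasp_term = (self.quality_loss(pred["quality"], target["quality"])
                      + quaternion_loss(pred["rotation"], target["rotation"])
                      + self.width_loss(pred["width"], target["width"]))
        recon_term = self.occupancy_loss(pred["occupancy"], target["occupancy"])
        # Adding reconstruction supervision on top of the grasp losses is what
        # keeps the shared features geometry-aware.
        return grasp_term + self.recon_weight * recon_term
```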

Method Overview

Model architecture of Grasp detection via Implicit Geometry and Affordance (GIGA). The input is a TSDF fused from the depth image. After a 3D convolution layer, the output 3D voxel features are projected to canonical planes and aggregated into 2D feature grids. After passing each of the three feature planes through three independent U-Nets, we query the local feature at grasp center/occupancy query point with bilinear interpolation. The affordance implicit functions predict grasp parameters from the local feature at the grasp center. The geometry implicit function predicts occupancy probability from the local feature at the query point.
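The sketch below traces this pipeline in simplified PyTorch. Every module is a stand-in (small convolution stacks replace the U-Nets, a single MLP replaces the per-parameter decoders, and the axis conventions are arbitrary), so treat it as a reading aid rather than the released model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def plane_feature(plane, uv):
    """Bilinearly interpolate a 2D feature plane at normalized coords in [-1, 1].
    plane: (B, C, H, W), uv: (B, M, 2)  ->  (B, M, C)"""
    grid = uv.unsqueeze(2)                                     # (B, M, 1, 2)
    feat = F.grid_sample(plane, grid, mode="bilinear", align_corners=True)
    return feat.squeeze(-1).transpose(1, 2)                    # (B, M, C)


class GIGASketch(nn.Module):
    """Simplified stand-in for the architecture described above."""

    def __init__(self, c=32):
        super().__init__()
        self.conv3d = nn.Conv3d(1, c, kernel_size=3, padding=1)
        # Placeholders for the three independent 2D U-Nets.
        self.unets = nn.ModuleList(
            [nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.ReLU(),
                           nn.Conv2d(c, c, 3, padding=1)) for _ in range(3)]
        )
        feat_dim = 3 * c
        # Affordance head: grasp quality logit (1) + rotation quaternion (4) + width (1).
        self.affordance_head = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1 + 4 + 1))
        # Geometry head: occupancy logit at the query point.
        self.occupancy_head = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def query(self, planes, points):
        """points: (B, M, 3) in [-1, 1]; gather local features from the three planes."""
        xy, xz, yz = planes
        return torch.cat([
            plane_feature(xy, points[..., [0, 1]]),
            plane_feature(xz, points[..., [0, 2]]),
            plane_feature(yz, points[..., [1, 2]]),
        ], dim=-1)

    def forward(self, tsdf, grasp_centers, occ_queries):
        # tsdf: (B, 1, D, D, D) voxel grid fused from the depth image.
        v = self.conv3d(tsdf)                                  # (B, C, D, D, D)
        # Project voxel features onto the three canonical planes (mean pooling here).
        planes = [self.unets[0](v.mean(dim=4)),                # xy-plane
                  self.unets[1](v.mean(dim=3)),                # xz-plane
                  self.unets[2](v.mean(dim=2))]                # yz-plane
        affordance = self.affordance_head(self.query(planes, grasp_centers))
        occupancy = self.occupancy_head(self.query(planes, occ_queries))
        return affordance, occupancy
```

At inference time, grasp centers can be queried densely and the highest-quality prediction executed; refer to the released code for the actual model and grasp selection.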

Self-supervised Training Data Collection

Data generation for packed scene

Data generation for pile scene
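The clips above show data generation in the two scene types. As a rough, hypothetical sketch of how self-supervised grasp labels can be gathered in such a simulator, consider the loop below; the `sim` interface and every method name on it are placeholders of our own, not the released data-generation pipeline.

```python
def collect_grasp_trials(sim, num_scenes, grasps_per_scene, scene_type="packed"):
    """Hypothetical self-supervised data collection loop.

    `sim` is assumed to expose scene generation, depth rendering, and grasp
    execution; none of these names come from the released code.
    """
    dataset = []
    for _ in range(num_scenes):
        sim.reset(scene_type=scene_type)          # spawn objects in a packed or pile scene
        depth = sim.render_depth()                # depth image, later fused into a TSDF
        for _ in range(grasps_per_scene):
            grasp = sim.sample_grasp_candidate()  # 6-DoF pose + gripper width near a surface
            success = sim.execute_grasp(grasp)    # True if an object is lifted
            dataset.append({"depth": depth, "grasp": grasp, "label": success})
            sim.restore_scene()                   # undo the trial before the next attempt
    return dataset
```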

Simulated Grasps

Simulated grasps in packed scenes

Simulated grasps in pile scenes

Real Robot Experiments

Clutter Removal

Decluttering packed objects (10X speed)

Decluttering piled objects (10X speed)

Comparison with Baseline

  1. VGN shows two consecutive failures and gives up this decluttering round.

  2. GIGA first grasps the partially occluded bottle.

  3. GIGA then successfully grasps and removes the remaining objects.