Abstract: A deep learning architecture is proposed to predict graspable locations for robotic manipulation. It handles scenes containing no objects, a single object, or multiple objects. By casting the learning problem as classification with null hypothesis competition instead of regression, the deep neural network with RGB-D image input predicts multiple grasp candidates for a single object or multiple objects, in a single shot. The method outperforms state-of-the-art approaches on the Cornell dataset with 96.0% and 96.1% accuracy on image-wise and object-wise splits, respectively. Evaluation on a multi-object dataset illustrates the generalization capability of the architecture. Grasping experiments achieve 96.0% grasp localization and 89.0% grasping success rates on a test set of household objects. The real-time process takes less than 0.25 s from image to plan.
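The key reframing is that grasp orientation is predicted by classification over discrete angle bins that compete with a null (non-grasp) hypothesis, rather than by regressing the angle. The sketch below illustrates this idea only; it is not the repository's code, and the bin count `R`, the bin layout, and the helper names are illustrative assumptions.

```python
# Illustrative sketch (not the repo's implementation) of orientation
# classification with a null-hypothesis class: class 0 means "no valid grasp",
# classes 1..R are orientation bins. R = 19 is an assumed bin count.
import numpy as np

R = 19                      # assumed number of orientation bins over [-90, 90) degrees
BIN_WIDTH = 180.0 / R       # degrees covered by each orientation class


def angle_to_class(theta_deg):
    """Map a grasp angle in degrees to an orientation class in {1, ..., R}.

    Class 0 is reserved for the null (non-grasp) hypothesis, so it is never
    produced by this encoding; it only competes at prediction time.
    """
    theta = (theta_deg + 90.0) % 180.0          # shift to [0, 180)
    return int(theta // BIN_WIDTH) + 1          # 1-indexed grasp classes


def class_to_angle(cls):
    """Recover the bin-center angle (degrees) for a predicted grasp class."""
    if cls == 0:
        return None                             # null hypothesis: no grasp
    return (cls - 1) * BIN_WIDTH + BIN_WIDTH / 2.0 - 90.0


def decode_predictions(class_scores, boxes, score_thresh=0.5):
    """Keep candidates whose best non-null class beats the null hypothesis.

    class_scores: (N, R + 1) softmax scores per candidate region.
    boxes:        (N, 4) axis-aligned rectangles (x, y, w, h).
    Returns a list of (x, y, w, h, theta_deg, score) grasp candidates.
    """
    grasps = []
    for scores, box in zip(class_scores, boxes):
        cls = int(np.argmax(scores))
        if cls == 0 or scores[cls] < score_thresh:
            continue                            # null hypothesis wins: skip candidate
        x, y, w, h = box
        grasps.append((x, y, w, h, class_to_angle(cls), float(scores[cls])))
    return grasps
```

Because every candidate region yields its own class decision, a single forward pass can return zero, one, or many grasps, which is how the single-shot, multi-object behavior described above falls out of the formulation.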
Paper (RA-L with IROS 2018): arXiv
Code: GitHub
Dataset: GitHub
Multi-object, multi-grasp in a single shot:
Comparison to state-of-the-art methods:
Proposed architecture: