Yunzhi Lin*, Chao Tang*, Fu-Jen Chu, and Patricio A. Vela
Georgia Institute of Technology, GA, U.S.A.
*The first two authors contributed equally.

Abstract: A segmentation-based architecture is proposed to decompose objects into multiple primitive shapes from monocular depth input for robotic manipulation. The backbone deep network is trained on synthetic data with 6 classes of primitive shapes generated by a simulation engine. Each primitive shape is designed with parametrized grasp families, permitting the pipeline to identify multiple grasp candidates per shape primitive region. The grasps are priority ordered via a proposed ranking algorithm, with the first feasible one chosen for execution. On task-free grasping of individual objects, the method achieves a 94% success rate; on task-oriented grasping, it achieves a 76% success rate. Overall, the results support the hypothesis that shape primitives can guide both task-free and task-relevant grasp prediction.
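The "priority order, then execute the first feasible grasp" step can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `GraspCandidate` fields, the scalar `score`, and the `is_feasible` callback are all hypothetical stand-ins for the pipeline's actual ranking criteria and feasibility checks.

```python
# Hypothetical sketch of ranked grasp selection: sort candidates by a
# scalar score, then return the first one that passes a feasibility check.
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple


@dataclass
class GraspCandidate:
    shape_class: str                 # e.g. one of the 6 primitive-shape classes
    score: float                     # assumed scalar ranking score
    pose: Tuple[float, ...]          # placeholder for a grasp pose


def select_grasp(candidates: List[GraspCandidate],
                 is_feasible: Callable[[GraspCandidate], bool]
                 ) -> Optional[GraspCandidate]:
    """Visit candidates in descending score order; return the first feasible one."""
    for cand in sorted(candidates, key=lambda c: c.score, reverse=True):
        if is_feasible(cand):
            return cand
    return None  # no feasible grasp found
```

For example, if the top-ranked candidate is ruled infeasible (e.g. by a collision or reachability check), selection falls through to the next-best candidate.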
Pre-print: arXiv
Code: GitHub
Grasp family for wide cylinder, tall cylinder, and semi-sphere:
Sample segmentation outcomes for test scenarios:
Supplementary Video:
The proposed deep-network, segmentation-based pipeline for generating grasp candidates for a novel object:
Citation:
@inproceedings{lin2020using,
title={Using synthetic data and deep networks to recognize primitive shapes for object grasping},
author={Lin, Yunzhi and Tang, Chao and Chu, Fu-Jen and Vela, Patricio A},
booktitle={2020 IEEE International Conference on Robotics and Automation (ICRA)},
pages={10494--10501},
year={2020},
organization={IEEE}
}