Hongtao Wen*, Jianhang Yan*, Wanli Peng†, Yi Sun
Dalian University of Technology, China
Grasp pose estimation is a crucial task for robots interacting with the real world. However, most existing methods require an exact 3D object model to be available beforehand or a large amount of grasp annotations for training. To avoid these problems, we propose TransGrasp, a category-level grasp pose estimation method that predicts grasp poses for a whole category of objects by labeling only one object instance. Specifically, we transfer grasp poses across a category of objects based on their shape correspondences and propose a grasp pose refinement module that further fine-tunes the gripper's grasp poses to ensure successful grasps. Experiments demonstrate that our method achieves high-quality grasps from the transferred grasp poses.
Overview of the proposed TransGrasp
* H. Wen and J. Yan—Equal contributions.
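To give a rough feel for the transfer step, the minimal NumPy sketch below moves a single labeled grasp from a source instance to a target instance, assuming point-wise shape correspondences are already available. All names here (`estimate_rigid_transform`, `transfer_grasp`, the neighborhood size `k`) are hypothetical illustrations, not part of the released code: TransGrasp itself learns dense correspondences through a category-level shape space rather than fitting a local rigid transform.

```python
# Minimal sketch: transfer one grasp via precomputed shape correspondences.
# Hypothetical helper names; not the actual TransGrasp implementation.
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch) mapping src points to dst.
    src, dst: (N, 3) arrays of corresponding points."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: keep the rotation proper (det(R) = +1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def transfer_grasp(grasp_R, grasp_t, src_points, dst_points, k=64):
    """Transfer a grasp (rotation grasp_R, center grasp_t) from a labeled
    source instance to a target instance. src_points[i] and dst_points[i]
    are assumed to be corresponding surface points of the two instances."""
    # Fit a transform on the k source points nearest the grasp center,
    # so the transferred grasp follows the local geometry of the target.
    dists = np.linalg.norm(src_points - grasp_t, axis=1)
    idx = np.argsort(dists)[:k]
    R, t = estimate_rigid_transform(src_points[idx], dst_points[idx])
    return R @ grasp_R, R @ grasp_t + t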
If you find this work useful, please consider citing it as follows:
@inproceedings{wen2022transgrasp,
  title={{TransGrasp}: Grasp pose estimation of a category of objects by transferring grasps from only one labeled instance},
  author={Wen, Hongtao and Yan, Jianhang and Peng, Wanli and Sun, Yi},
  booktitle={European Conference on Computer Vision},
  pages={445--461},
  year={2022},
  organization={Springer}
}