GenDexGrasp:
Generalizable Dexterous Grasping
Puhao Li*, Tengyu Liu*, Yuyang Li, Yiran Geng, Yixin Zhu, Yaodong Yang, Siyuan Huang
International Conference on Robotics and Automation (ICRA) 2023
Abstract
Generating dexterous grasps has been a long-standing and challenging robotic task. Despite recent progress, existing methods primarily suffer from two issues. First, most prior work focuses on a specific type of robot hand and lacks the capability to generalize to unseen hands. Second, prior methods often fail to rapidly generate diverse grasps with a high success rate. To jointly tackle these challenges with a unified solution, we propose GenDexGrasp, a novel hand-agnostic algorithm for generalizable grasping. GenDexGrasp is trained on MultiDex, our proposed large-scale multi-hand grasping dataset synthesized with force-closure optimization. By leveraging the contact map as a hand-agnostic intermediate representation, GenDexGrasp efficiently generates diverse and plausible grasping poses with a high success rate and can transfer among diverse multi-fingered robotic hands. Compared with previous methods, GenDexGrasp achieves a strong three-way trade-off among success rate, inference speed, and diversity.
Introduction Video
Methods
Pipeline
We first collect a large-scale synthetic dataset for multiple hands with differentiable force closure (DFC); see the MultiDex Dataset for the data and more information. We then train a CMap-CVAE to generate hand-agnostic contact maps for unseen objects. Finally, we optimize grasping poses for unseen hands using the generated contact maps.
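As a rough illustration of the final stage, the sketch below optimizes hand pose parameters so that the contact map the hand induces on the object matches a target contact map sampled from the CMap-CVAE. This is a minimal PyTorch sketch, not the released implementation: the hand model is a stand-in for a differentiable hand (forward kinematics plus surface sampling), the distance-to-contact squashing and the names (`hand_surface`, `induced_contact_map`, `target_cmap`) are illustrative assumptions, and penetration and joint-limit penalties are omitted.

```python
import torch

def induced_contact_map(obj_pts, hand_pts, gamma=10.0):
    # Distance from each object point to its nearest hand surface point,
    # squashed into [0, 1]: ~1 near contact, ~0 far from the hand
    # (illustrative squashing; the paper's exact mapping may differ).
    d = torch.cdist(obj_pts, hand_pts).min(dim=1).values          # (N,)
    return 1.0 - 2.0 * (torch.sigmoid(gamma * d) - 0.5)

def hand_surface(q):
    # Stand-in for a differentiable hand model: in practice this is forward
    # kinematics mapping root pose + joint angles to points on the hand surface.
    return q.view(-1, 3)

# Toy inputs: object point cloud and a target contact map (in the real pipeline
# the target is sampled from the CMap-CVAE conditioned on the object).
obj_pts = torch.rand(2048, 3)
target_cmap = torch.rand(2048)

q = torch.randn(90, requires_grad=True)          # hand pose parameters (placeholder)
optimizer = torch.optim.Adam([q], lr=1e-2)

for step in range(200):
    optimizer.zero_grad()
    cmap = induced_contact_map(obj_pts, hand_surface(q))
    loss = torch.nn.functional.mse_loss(cmap, target_cmap)   # match the target map
    loss.backward()                                           # (penetration term omitted)
    optimizer.step()
```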
Aligned Distance for Contact
Distinguishing Contact and Non-Contact Areas
Instead of the Euclidean distance, we propose an aligned distance to measure how far each point on the object's surface is from the hand surface, which better distinguishes contact from non-contact areas, especially on thin-shell objects.
Figure: Euclidean distance vs. aligned distance, and the resulting contact maps (with ED vs. with AD). Comparison between aligned and Euclidean distances on thin-shell objects.
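To make the idea concrete, here is a hedged sketch contrasting the plain nearest-neighbor Euclidean distance with an alignment-aware variant. The exact formula in the paper may differ; in this illustrative version, the Euclidean distance to a hand point is inflated when the direction from the object point to that hand point is poorly aligned with the object's surface normal, so hand points reached "through" a thin shell no longer register as contacts.

```python
import torch

def euclidean_contact_distance(obj_pts, hand_pts):
    # Plain nearest-neighbor Euclidean distance from each object point to the hand.
    return torch.cdist(obj_pts, hand_pts).min(dim=1).values

def aligned_contact_distance(obj_pts, obj_normals, hand_pts, alpha=4.0):
    # Illustrative alignment-aware distance (assumed form, not the paper's exact one):
    # scale the Euclidean distance by an exponential penalty on the misalignment
    # between the object surface normal and the direction toward the hand point.
    diff = hand_pts[None, :, :] - obj_pts[:, None, :]           # (N, M, 3)
    dist = diff.norm(dim=-1).clamp_min(1e-8)                    # (N, M)
    cos = (diff * obj_normals[:, None, :]).sum(-1) / dist       # alignment with normal
    return (dist * torch.exp(alpha * (1.0 - cos.abs()))).min(dim=1).values

# Usage on toy data: thin shells benefit because hand points on the wrong side of
# the surface are misaligned with the normal and thus pushed far away.
obj_pts = torch.rand(1024, 3)
obj_normals = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
hand_pts = torch.rand(256, 3)
d_euclid = euclidean_contact_distance(obj_pts, hand_pts)
d_aligned = aligned_contact_distance(obj_pts, obj_normals, hand_pts)
```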
Grasps in the Table-Top Setting
With appropriate training, our approach also applies to objects in a table-top setting. We showcase generated grasps with the Shadow Hand:
More Experiments
Quantitative Results
We compare GenDexGrasp with DFC, GraspCVAE (GC), and UniGrasp (UniG.) in Tab. I. Our method achieves a slightly lower success rate than DFC and the top-1 variant of UniGrasp, but generates diverse grasping poses in a short time, achieving a good three-way trade-off among quality, diversity, and speed.
We examine the efficacy of the proposed aligned distance in Tab. II. Specifically, we evaluate the success rate and diversity of the full model (full) and of the full model with Euclidean-distance contact maps (-align). In all three cases, using the Euclidean distance significantly lowers the success rate while only slightly improving diversity.
We further compare the performance of GenDexGrasp on seen and unseen hands in Tab. III. The results show that our method remains robust in out-of-domain scenarios across various hand structures.
Successful Grasps
Examples of the generated grasping poses for unseen hands and objects. From top to bottom: Barrett, Allegro, and ShadowHand:
Failed Grasps
Failure cases with Allegro (top) and ShadowHand (bottom). The last two columns show artifacts caused by contact ambiguities when using the Euclidean distance instead of the aligned distance:
Conclusions
We introduce GenDexGrasp, a versatile dexterous grasping method that generalizes to unseen hands. By leveraging the contact map as a hand-agnostic intermediate representation, a novel aligned distance for measuring hand-to-point distances, and a novel grasping algorithm, GenDexGrasp generates diverse and high-quality grasping poses within a reasonable inference time. Our quantitative experiments suggest that our method is the first generalizable grasping algorithm to properly balance quality, diversity, and speed. In addition, we contribute MultiDex, a large-scale synthetic dexterous grasping dataset. MultiDex features diverse grasping poses, a wide range of household objects, and five robotic hands with diverse kinematic structures.