Cross-Embodiment Dexterous Grasping with Reinforcement Learning
Haoqi Yuan, Bohan Zhou, Yuhui Fu, Zongqing Lu
PKU, BAAI
Overview
We propose CrossDex, a reinforcement learning method that learns a single cross-embodiment policy for dexterous grasping. The learned policy can grasp diverse objects with a variety of dexterous hands and transfers to hands not seen during training.
CrossDex employs a unified observation and action space to facilitate learning a universal policy across various dexterous hands. Rather than relying on joint angles specific to each hand, the policy uses the positions of the fingertips and palm to capture the spatial relationship between the hand and the object. Actions are represented as eigengrasps of the MANO hand model, which are mapped to position targets for each hand's PD controller through a retargeting process. This design, akin to teleoperation, enables consistent control across different dexterous hands. The policy is trained with reinforcement learning in a cross-embodiment simulation environment built on IsaacGym. To learn a vision-based policy, we replace the object pose in this pipeline with the object's point cloud.
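To make the hand-agnostic interface concrete, below is a minimal Python sketch of the observation and action mapping described above. All names and dimensions (CrossEmbodimentInterface, retarget_fn, NUM_EIGENGRASPS, MANO_POSE_DIM) are illustrative assumptions, not the paper's implementation.

import numpy as np

# Assumed dimensions: eigengrasps are principal components of MANO hand
# poses (15 joints x 3 axis-angle dims); both values are illustrative.
NUM_EIGENGRASPS = 9
MANO_POSE_DIM = 45

class CrossEmbodimentInterface:
    """Unified observation/action space shared across dexterous hands."""

    def __init__(self, eigengrasp_basis, mean_pose, retarget_fn, num_fingers=5):
        self.basis = eigengrasp_basis    # (NUM_EIGENGRASPS, MANO_POSE_DIM)
        self.mean_pose = mean_pose       # (MANO_POSE_DIM,)
        self.retarget_fn = retarget_fn   # hypothetical: MANO pose -> joint targets
        self.num_fingers = num_fingers

    def observation(self, fingertip_pos, palm_pos, object_pos):
        # Hand-agnostic observation: fingertip and palm positions relative
        # to the object, instead of hand-specific joint angles.
        rel_tips = fingertip_pos - object_pos    # (num_fingers, 3)
        rel_palm = palm_pos - object_pos         # (3,)
        return np.concatenate([rel_tips.ravel(), rel_palm])

    def action_to_joint_targets(self, eigengrasp_coeffs):
        # Decode the low-dimensional eigengrasp action into a full MANO
        # pose, then retarget it to this hand's PD position targets.
        mano_pose = self.mean_pose + eigengrasp_coeffs @ self.basis
        return self.retarget_fn(mano_pose)

# Example usage with a random basis and an identity retargeting stub:
basis = 0.1 * np.random.randn(NUM_EIGENGRASPS, MANO_POSE_DIM)
iface = CrossEmbodimentInterface(basis, np.zeros(MANO_POSE_DIM), retarget_fn=lambda p: p)
obs = iface.observation(np.zeros((5, 3)), np.zeros(3), np.array([0.0, 0.0, 0.1]))
targets = iface.action_to_joint_targets(np.zeros(NUM_EIGENGRASPS))

Under this interface, swapping in a different hand only requires supplying that hand's retargeting function; the policy's inputs and outputs stay the same.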
Sim-to-Real Experiments
Simulation
Real
Videos showing sim-to-real deployment of the learned vision-based policy on our hardware platform, using the LEAP Hand, a 6-DoF robot arm, and RealSense D435 cameras.
Failure Cases
The robot arm collides with the table.
The cube is completely occluded by the hand during grasping.
The grasp fails on an unseen, lightweight paper cup.
Additional Videos in Simulation
mustard bottle
mug
toy
cup
banana
apple
Citation
@article{yuan2024cross,
  title={Cross-Embodiment Dexterous Grasping with Reinforcement Learning},
  author={Yuan, Haoqi and Zhou, Bohan and Fu, Yuhui and Lu, Zongqing},
  journal={arXiv preprint arXiv:2410.02479},
  year={2024}
}