UniGrasp: Learning a Unified Model to Grasp with Multi-fingered Robotic Hands

Lin Shao, Fabio Ferreira*, Mikael Jorda*, Varun Nambiar*, Jianlan Luo, Eugen Solowjow, Juan Aparicio Ojea, Oussama Khatib, Jeannette Bohg

Abstract

To achieve a successful grasp, gripper attributes such as geometry and kinematics play a role as important as the object geometry. The majority of previous work has focused on developing grasp methods that generalize over novel object geometry but are specific to a single robot hand. We propose UniGrasp, an efficient data-driven grasp synthesis method that takes both the object geometry and the gripper attributes as inputs. UniGrasp is based on a novel deep neural network architecture that selects sets of contact points from the input point cloud of the object. The proposed model is trained on a large dataset to produce contact points that are in force closure and reachable by the robot hand. Because the output is a set of contact points, the model transfers across a diverse set of multi-fingered robotic hands. In simulation, over 90% of the contact-point sets in the top-10 predictions are valid, and in real-world experiments the model achieves more than 90% grasp success for various known two-fingered and three-fingered grippers. It further achieves grasp success rates of 93%, 83%, and 90% in real-world experiments on an unseen two-fingered gripper and two unseen multi-fingered anthropomorphic robotic hands.
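To make the core idea concrete, the sketch below shows how contact-point selection can be conditioned on both an object point cloud and a gripper encoding. This is a minimal, hypothetical illustration in plain PyTorch, not the released UniGrasp implementation: the module name ContactPointScorer, the layer sizes, and the single-stage top-k selection are our simplifications, whereas the paper's architecture selects sets of contact points with a dedicated deep network.

import torch
import torch.nn as nn

class ContactPointScorer(nn.Module):
    """Illustrative sketch (hypothetical): score each object point as a
    candidate contact, conditioned on a gripper feature vector."""

    def __init__(self, gripper_dim=64, hidden=128):
        super().__init__()
        # Shared per-point MLP (PointNet-style) over xyz coordinates.
        self.point_mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Scorer conditioned on per-point, global, and gripper features.
        self.scorer = nn.Sequential(
            nn.Linear(hidden * 2 + gripper_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points, gripper_feat):
        # points: (B, N, 3) object point cloud
        # gripper_feat: (B, gripper_dim) encoding of gripper geometry/kinematics
        per_point = self.point_mlp(points)                   # (B, N, H)
        global_feat = per_point.max(dim=1, keepdim=True)[0]  # (B, 1, H)
        n = points.shape[1]
        fused = torch.cat([
            per_point,
            global_feat.expand(-1, n, -1),
            gripper_feat.unsqueeze(1).expand(-1, n, -1),
        ], dim=-1)
        return self.scorer(fused).squeeze(-1)                # (B, N) contact scores

# Usage: rank object points and keep the top-10 candidate contacts,
# mirroring the top-10 evaluation reported in the abstract.
model = ContactPointScorer()
scores = model(torch.randn(2, 1024, 3), torch.randn(2, 64))
top10 = scores.topk(10, dim=1).indices

Conditioning on a learned gripper feature, rather than baking one hand into the network, is what allows a single model to serve many grippers; swapping hands only changes the gripper encoding.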

Bibtex

@article{shao2020unigrasp,
  title={UniGrasp: Learning a Unified Model to Grasp With Multifingered Robotic Hands},
  author={Shao, Lin and Ferreira, Fabio and Jorda, Mikael and Nambiar, Varun and Luo, Jianlan and Solowjow, Eugen and Ojea, Juan Aparicio and Khatib, Oussama and Bohg, Jeannette},
  journal={IEEE Robotics and Automation Letters},
  volume={5},
  number={2},
  pages={2286--2293},
  year={2020},
  publisher={IEEE},
  doi={10.1109/LRA.2020.2969946}
}
