Learning to Regrasp by Learning to Place


Shuo Cheng, Kaichun Mo, Lin Shao

[paper][code][data]


Abstract

In this paper, we explore whether a robot can learn to regrasp a diverse set of objects to achieve various desired grasp poses. Regrasping is needed whenever a robot's current grasp pose fails to perform a desired manipulation task. Endowing robots with such an ability has applications in many domains such as manufacturing and domestic services. Yet, it is a challenging task due to the large diversity of geometry in everyday objects and the high dimensionality of the state and action space. In this paper, we propose a system that takes partial point clouds of an object and the supporting environment as inputs and outputs a sequence of pick-and-place operations to transform an initial object grasp pose into the desired one. The key techniques include a neural stable placement predictor and a regrasp-graph-based solution that leverages and changes the surrounding environment. We introduce a new and challenging synthetic dataset for learning and evaluating the proposed approach. We demonstrate the effectiveness of our proposed system with both simulated and real-world experiments.

Bibtex

@inproceedings{cheng2021learning,
  title={Learning to Regrasp by Learning to Place},
  author={Shuo Cheng and Kaichun Mo and Lin Shao},
  booktitle={5th Annual Conference on Robot Learning},
  year={2021},
  url={https://openreview.net/forum?id=Qdb1ODTQTnL}
}

Dataset

We collect the placement dataset by randomly placing objects with respect to the environment and running the simulation to check whether the objects remain static. To make the stable placements robust to variations in dynamics and geometry, we randomize the dynamics parameters, including friction, mass, and external forces. A minimal sketch of this labeling procedure is shown below.
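The following sketch illustrates how such a stability check could be implemented in PyBullet; the URDF paths, parameter ranges, and the 1 cm motion threshold are illustrative assumptions, not the exact settings used to build the dataset.

# Sketch of a simulation-based stability check with dynamics randomization.
import numpy as np
import pybullet as p

def is_stable_placement(support_urdf, object_urdf, object_pose, sim_steps=500):
    """Drop the object at the sampled pose and report whether it stays put."""
    cid = p.connect(p.DIRECT)
    p.setGravity(0, 0, -9.81)
    p.loadURDF(support_urdf, useFixedBase=True)
    pos, orn = object_pose  # (xyz, quaternion) sampled relative to the support
    obj_id = p.loadURDF(object_urdf, basePosition=pos, baseOrientation=orn)

    # Domain randomization over friction, mass, and external forces
    # (ranges here are assumptions for illustration).
    p.changeDynamics(obj_id, -1,
                     lateralFriction=np.random.uniform(0.3, 1.0),
                     mass=np.random.uniform(0.1, 2.0))
    p.applyExternalForce(obj_id, -1,
                         forceObj=np.random.uniform(-1, 1, size=3).tolist(),
                         posObj=list(pos), flags=p.WORLD_FRAME)

    for _ in range(sim_steps):
        p.stepSimulation()

    new_pos, _ = p.getBasePositionAndOrientation(obj_id)
    moved = np.linalg.norm(np.array(new_pos) - np.array(pos))
    p.disconnect(cid)
    return moved < 0.01  # label the pose stable if the object barely moved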

Our dataset contains 50 objects and 30 supports, with about 1 million stable and unstable poses in total. 

For more details, please refer to the README.md in the dataset folder.

We visualize some randomly selected stable placements in our dataset:

Technical Approach

Our system gradually constructs a search graph to find a valid sequence of object pick-and-place operations that allows the robot to reach the final target grasp pose; a minimal sketch of this search is given below.
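The sketch below conveys the regrasp-graph idea under assumed data structures: nodes are grasp poses, and an edge exists when a stable placement lets the robot release one grasp and pick the object up with another. Names such as feasible_regrasp are hypothetical placeholders, not the paper's API.

# Breadth-first search over a regrasp graph for a pick-and-place sequence.
from collections import deque

def plan_regrasp(initial_grasp, target_grasp, candidate_grasps, placements,
                 feasible_regrasp):
    """Search for a sequence of (placement, next_grasp) steps linking two grasps."""
    frontier = deque([(initial_grasp, [])])
    visited = {initial_grasp}
    while frontier:
        grasp, plan = frontier.popleft()
        if grasp == target_grasp:
            return plan  # list of pick-and-place steps to execute
        for placement in placements:            # stable placements from the predictor
            for next_grasp in candidate_grasps:
                if next_grasp in visited:
                    continue
                if feasible_regrasp(grasp, placement, next_grasp):
                    visited.add(next_grasp)
                    frontier.append((next_grasp, plan + [(placement, next_grasp)]))
    return None  # no regrasp sequence found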

Given partial point clouds of the support and the object, together with a random variable drawn from a Gaussian distribution, the pose proposal network generates diverse stable placement proposals (see the sketch below).
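A minimal PyTorch sketch of such a conditional proposal generator is shown here, assuming a simple PointNet-style encoder; the layer sizes and the 3D-translation-plus-quaternion output parameterization are illustrative assumptions, not the paper's exact architecture.

# Conditional pose proposal generator: point cloud features + Gaussian noise -> pose.
import torch
import torch.nn as nn

class PointEncoder(nn.Module):
    """Shared per-point MLP followed by max pooling (PointNet-style)."""
    def __init__(self, out_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, 128), nn.ReLU(),
                                 nn.Linear(128, out_dim))
    def forward(self, pts):                       # pts: (B, N, 3)
        return self.mlp(pts).max(dim=1).values    # (B, out_dim)

class PoseProposalNet(nn.Module):
    def __init__(self, noise_dim=32):
        super().__init__()
        self.obj_enc = PointEncoder()
        self.sup_enc = PointEncoder()
        self.head = nn.Sequential(nn.Linear(256 * 2 + noise_dim, 256), nn.ReLU(),
                                  nn.Linear(256, 7))   # 3D translation + quaternion
    def forward(self, obj_pts, sup_pts, z):
        feat = torch.cat([self.obj_enc(obj_pts), self.sup_enc(sup_pts), z], dim=-1)
        out = self.head(feat)
        trans = out[:, :3]
        quat = nn.functional.normalize(out[:, 3:], dim=-1)  # unit quaternion
        return trans, quat

Diverse proposals are obtained by drawing different noise vectors, e.g. z = torch.randn(16, 32) for 16 candidate placements of the same object-support pair.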

The pose classifier then predicts the stability probability, the contact probability, and the displacement from every point to its nearest contact point, as sketched below.
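The following PyTorch sketch illustrates the classifier's three output heads: one global stability probability, plus per-point contact probabilities and 3D displacements to the nearest contact point. The shared per-point MLP backbone is a simplification; the actual network may differ.

# Placement classifier with global and per-point output heads.
import torch
import torch.nn as nn

class PlacementClassifier(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                       nn.Linear(64, feat_dim), nn.ReLU())
        self.stable_head = nn.Linear(feat_dim, 1)    # global: stable vs. unstable
        self.contact_head = nn.Linear(feat_dim, 1)   # per point: contact probability
        self.disp_head = nn.Linear(feat_dim, 3)      # per point: offset to contact

    def forward(self, pts):                          # pts: (B, N, 3), object + support
        feat = self.point_mlp(pts)                   # (B, N, feat_dim)
        global_feat = feat.max(dim=1).values         # (B, feat_dim)
        stable_prob = torch.sigmoid(self.stable_head(global_feat))   # (B, 1)
        contact_prob = torch.sigmoid(self.contact_head(feat))        # (B, N, 1)
        displacement = self.disp_head(feat)                          # (B, N, 3)
        return stable_prob, contact_prob, displacement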

Experiments

Visualization of generated stable object placements on real-world data:

Real-world regrasping demo:

corl_real_world_video.mp4

We visualize some randomly generated stable placements on the synthetic data, with the predicted stability score marked at the bottom.

Acknowledgements

We would like to thank UnitX for providing the UR5 robot arm used in the real-robot experiments. We thank Shenli Yuan for helping us set up the customized two-fingered gripper, and Hongzhuo Liang for helping us set up the network configuration.

Contacts

If you have any questions, please feel free to contact Shuo Cheng: shuocheng@gatech.edu.