Learning Visual Shape Control of Novel 3D Deformable Objects from Partial-View Point Clouds
Abstract:
If robots could reliably manipulate the shape of 3D deformable objects, they could find applications in fields ranging from home care to warehouse fulfillment to surgical assistance. Analytic models of elastic, 3D deformable objects require numerous parameters to describe the potentially infinite degrees of freedom that determine the object's shape. Previous attempts at 3D shape control rely on hand-crafted features to represent the object shape and require training of object-specific control models. We overcome these issues with our novel DeformerNet neural network architecture, which operates on a partial-view point cloud of the manipulated object and a point cloud of the goal shape to learn a low-dimensional representation of the object shape. This shape embedding enables the robot to learn a visual servo controller that outputs Cartesian pose changes for the robot end-effector, deforming the object toward its target shape. Crucially, we demonstrate both in simulation and on a physical robot that DeformerNet reliably generalizes to object shapes and material stiffnesses not seen during training, and that it outperforms comparison methods on both generic shape control and the surgical task of retraction.
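In code, the closed loop described above looks roughly like the sketch below. This is a schematic only: the deformernet callable, the robot interface, and the Chamfer-distance stopping criterion are hypothetical stand-ins for illustration, not the paper's actual API.

import numpy as np

def chamfer_distance(a, b):
    # Symmetric Chamfer distance between point clouds a (N, 3) and b (M, 3).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def shape_servo(robot, deformernet, goal_cloud, tol=1e-3, max_iters=50):
    # Hypothetical closed-loop shape servoing: `deformernet` maps the
    # (current cloud, goal cloud) pair to an end-effector pose change.
    for _ in range(max_iters):
        current_cloud = robot.capture_partial_view_cloud()   # (N, 3) array
        if chamfer_distance(current_cloud, goal_cloud) < tol:
            break                                            # shapes match
        delta_pose = deformernet(current_cloud, goal_cloud)  # Cartesian change
        robot.move_end_effector(delta_pose)                  # execute and repeat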
Our paper is accepted to ICRA 2022! It is now available on arXiv: https://arxiv.org/abs/2110.04685
Videos:
Sample shape servoing sequences:
Code:
Dataset:
Other materials:
1. Surgical Retraction algorithm details:
We present in Algorithm 1 our approach to translating the hand-specified plane into a goal point cloud interpretable by our DeformerNet shape servoing algorithm. We use RANSAC to find a dominant plane in the object cloud (line 4) and then find the minimum rotation that aligns this plane with the target plane (line 5). We then apply this estimated transform to all points not lying on the correct side of the target plane and combine them with the points currently satisfying the goal to form the goal point cloud (lines 5-7). If, after reaching the goal point cloud, any part of the object still resides on the wrong side of the plane, we shift the target plane further into the goal region along the plane's normal and repeat the entire process (line 13).
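Below is a minimal sketch of the goal-cloud construction (Algorithm 1, lines 4-7), assuming Open3D for the RANSAC plane fit and SciPy for the minimum rotation. The function name make_retraction_goal, the margin parameter, and the centroid-based rotation pivot are illustrative choices, not the paper's exact implementation.

import numpy as np
import open3d as o3d
from scipy.spatial.transform import Rotation

def make_retraction_goal(points, plane_normal, plane_point, margin=0.0):
    """Build a goal cloud that moves all object points to the correct
    side of a hand-specified target plane (Algorithm 1, lines 4-7).

    points:       (N, 3) partial-view object point cloud
    plane_normal: unit normal of the target plane, pointing into the goal region
    plane_point:  any point on the target plane
    """
    # Signed distance of each point to the target plane.
    d = (points - plane_point) @ plane_normal
    satisfied = d >= margin
    if satisfied.all():
        return points.copy()  # goal already met

    # Line 4: RANSAC fit of the dominant plane in the object cloud.
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    (a, b, c, _), _ = pcd.segment_plane(distance_threshold=0.005,
                                        ransac_n=3, num_iterations=1000)
    object_normal = np.array([a, b, c]) / np.linalg.norm([a, b, c])
    if object_normal @ plane_normal < 0:  # resolve the normal's sign ambiguity
        object_normal = -object_normal

    # Line 5: minimum rotation aligning the dominant plane with the target plane.
    rot, _ = Rotation.align_vectors(plane_normal[None], object_normal[None])

    # Lines 5-7: rotate the violating points (here, about the cloud centroid)
    # and keep the already-satisfied points to form the goal cloud.
    centroid = points.mean(axis=0)
    moved = rot.apply(points[~satisfied] - centroid) + centroid
    # Nudge any points still short of the plane across it along the normal.
    residual = (moved - plane_point) @ plane_normal
    moved += np.maximum(margin - residual, 0.0)[:, None] * plane_normal

    goal = points.copy()
    goal[~satisfied] = moved
    return goal

The outer loop of Algorithm 1 would then run shape servoing toward this goal cloud and, if any points still end up on the wrong side of the plane, shift plane_point further along plane_normal (line 13) and call the function again.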