Learning to Smooth and Fold Real Fabric Using Dense Object Descriptors Trained on Synthetic Color Images

Aditya Ganapathi, Priya Sundaresan, Brijen Thananjeyan, Ashwin Balakrishna,

Daniel Seita, Jennifer Grannen, Minho Hwang, Ryan Hoque,

Joseph E. Gonzalez, Nawid Jamali, Katsu Yamane, Soshi Iba, Ken Goldberg


Abstract

Robotic fabric manipulation is challenging due to the infinite-dimensional configuration space of fabric and its complex dynamics. In this paper, we learn visual representations of deformable fabric by training dense object descriptors that capture correspondences across images of fabric in various configurations. The learned descriptors capture higher-level geometric structure, facilitating the design of explainable policies. We demonstrate that the learned representation facilitates multi-step fabric smoothing and folding tasks on two real physical systems, the da Vinci surgical robot and the ABB YuMi, given high-level demonstrations from a supervisor. The system achieves a 78.8% average task success rate across six fabric manipulation tasks.


IROS 2020 Video Submission

IROS_2020_VideoSubmission_Comp.mp4

Descriptor Learning

Building on prior work on dense object descriptors for deformable manipulation, we first learn fabric descriptors by leveraging point-pair correspondences between images of fabric in different configurations, yielding a descriptor space that is invariant to fabric configuration.
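As a concrete illustration, below is a minimal PyTorch sketch of the pixelwise contrastive objective commonly used in the dense object descriptor literature: descriptors at corresponding pixels are pulled together, while descriptors at non-corresponding pixels are pushed apart up to a margin. The function name, tensor layout, and margin value are illustrative assumptions, not the exact implementation used in this work.

```python
import torch.nn.functional as F

def pixelwise_contrastive_loss(desc_a, desc_b, matches_a, matches_b,
                               non_matches_a, non_matches_b, margin=0.5):
    """desc_a, desc_b: (H*W, D) per-pixel descriptors, flattened over pixels.
    matches_*: long tensors of flattened pixel indices that correspond.
    non_matches_*: long tensors of pixel indices that do not correspond."""
    # Pull matched descriptors together: penalize their L2 distance.
    d_match = (desc_a[matches_a] - desc_b[matches_b]).norm(dim=1)
    match_loss = (d_match ** 2).mean()

    # Push non-matches apart until their distance exceeds the margin.
    d_non = (desc_a[non_matches_a] - desc_b[non_matches_b]).norm(dim=1)
    non_match_loss = (F.relu(margin - d_non) ** 2).mean()

    return match_loss + non_match_loss
```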

tshirt-correspondence.mp4

Example of the learned descriptors across two images of a pink t-shirt. The predicted correspondences are shown on the right.



cloth_transfer_sim_src_real_dest.mov

Example of the learned descriptors across an image of simulated cloth (left) and real cloth (center). The predicted correspondences are shown on the right.


cloth_real_transfer.mp4

Example of the learned descriptors across two real images of the cloth. The predicted correspondences are shown on the right.
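At inference time, a correspondence like those visualized above can be recovered by a nearest-neighbor lookup in descriptor space. The sketch below, with assumed array shapes and names, finds the target-image pixel whose descriptor is closest in L2 distance to a queried source pixel's descriptor:

```python
import numpy as np

def best_match(descriptor_map_src, descriptor_map_tgt, pixel_src):
    """descriptor_map_*: (H, W, D) arrays of per-pixel descriptors.
    pixel_src: (row, col) query pixel in the source image.
    Returns the (row, col) of the nearest target descriptor."""
    d_query = descriptor_map_src[pixel_src[0], pixel_src[1]]   # (D,)
    diff = descriptor_map_tgt - d_query                        # broadcast over H, W
    dist = np.linalg.norm(diff, axis=2)                        # (H, W) distance map
    return np.unravel_index(np.argmin(dist), dist.shape)
```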



Policies


Simulated Fabric Manipulation

We first roll out policies in a Blender simulation environment on square-cloth and T-shirt folding tasks. We find that the descriptors accurately localize correspondences across fabrics with different colors and configurations, and can imitate folding sequences in novel fabric configurations from a single provided demonstration.
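One way such one-shot imitation can work, sketched here under the assumption of descriptor-based lookup rather than as the exact policy used in this work: map the demonstrated pick and place pixels onto the fabric's current configuration via correspondence matching, reusing best_match from the sketch above.

```python
def transfer_demo_action(desc_demo, desc_current, pick_px, place_px):
    """Map a demonstrated pick/place pixel pair onto the fabric's current
    configuration via descriptor correspondences. Reuses best_match from
    the earlier sketch; all names here are illustrative."""
    pick_new = best_match(desc_demo, desc_current, pick_px)
    place_new = best_match(desc_demo, desc_current, place_px)
    return pick_new, place_new
```

Repeating this lookup for each step of a multi-step demonstration yields a full folding sequence adapted to the novel configuration.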


Physical Fabric Manipulation

We find that the learned policies transfer effectively to two different physical robotic systems, an ABB YuMi and a da Vinci Research Kit (dVRK), and can successfully perform fabric smoothing and folding tasks in novel configurations on both robots given a single demonstration of each task.
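Executing such pixel-space actions on hardware requires mapping image coordinates into the robot's base frame. The standard pinhole-camera deprojection below is one way to do this; the calibration setup of the actual systems is not described here, so treat this as an assumed sketch.

```python
import numpy as np

def pixel_to_world(u, v, depth, K, T_cam_to_base):
    """Deproject pixel (u, v) with measured depth (meters) into the robot
    base frame, given camera intrinsics K (3x3) and a camera-to-base
    transform T_cam_to_base (4x4). Assumed calibration, for illustration."""
    x = (u - K[0, 2]) * depth / K[0, 0]   # back-project along the x-axis
    y = (v - K[1, 2]) * depth / K[1, 1]   # back-project along the y-axis
    p_cam = np.array([x, y, depth, 1.0])  # homogeneous point in camera frame
    return (T_cam_to_base @ p_cam)[:3]    # 3D point in the robot base frame
```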