TieBot: Learning to Knot a Tie from Visual Demonstration through a Real-to-Sim-to-Real Approach

Abstract:

The tie-knotting task is highly challenging due to the tie's high deformability and the long horizon of the manipulation. This work presents TieBot, a Real-to-Sim-to-Real learning-from-visual-demonstration system that enables robots to learn to knot a tie. We introduce a Hierarchical Feature Matching approach to estimate a sequence of the tie's meshes from the demonstration video. Using these estimated meshes as subgoals, we learn feasible action sequences from point clouds with a teacher-student training paradigm in simulation. Lastly, our pipeline learns a residual policy when the learned policy is applied to real-world execution, mitigating the Sim2Real gap. We demonstrate the effectiveness of TieBot in simulation and the real world. In the real-world experiment, a dual-arm robot successfully knots a tie.
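The residual-policy step can be sketched as follows. This is a minimal illustration of the general idea, not the paper's actual networks: `base_policy`, `residual_policy`, and the observation/action dimensions are placeholders. The executed action is the sim-trained policy's output plus a small learned correction.

```python
import numpy as np

def base_policy(obs: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in for the policy trained in simulation:
    # maps an observation vector to a dual-arm action vector.
    return 0.1 * obs[:6]

def residual_policy(obs: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in for the residual policy learned from
    # real-world execution; it outputs a small corrective term.
    return 0.01 * obs[:6]

def corrected_action(obs: np.ndarray) -> np.ndarray:
    # Real-world action = simulation policy output + learned residual,
    # which compensates for the Sim2Real gap.
    return base_policy(obs) + residual_policy(obs)

obs = np.ones(12)        # placeholder observation (e.g., point-cloud features)
action = corrected_action(obs)
print(action)            # base action plus residual correction
```

The key design choice is that the residual only needs to model the (small) discrepancy between simulation and reality, which is easier to learn than a full real-world policy from scratch.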

human demonstrations


first tie-knotting


second tie-knotting


towel-folding

feature matching examples

first tie-knotting

second tie-knotting

towel-folding

keypoint detection examples

first tie-knotting

second tie-knotting

real2sim result examples

first tie-knotting

second tie-knotting

towel-folding


real-world experiment

We evaluate our real-world policy over 10 trials. In each trial, the tie's initial position is perturbed by roughly 5 cm. The policy achieves a final success rate of 50%.