TieBot: Model-based Learning to Knot a Tie from Visual Demonstration via Differentiable Physics Simulation
Abstract:
The tie-knotting task is highly challenging due to the tie's high deformability and the long horizon of the required manipulation. This work presents TieBot, a model-based learning-from-demonstration system that enables robots to learn to knot a tie. We introduce an iterative keypoint-learning and hierarchical matching approach to estimate the tie's shape from the demonstration video. Using these estimated shapes as subgoals in a differentiable cloth simulation, we combine model-free reinforcement learning with differentiable simulation to generate action sequences that reach the subgoals. An imitation learning approach then distills these action sequences into a policy operating on raw point clouds in simulation. Finally, our pipeline learns a residual policy when the imitated policy is deployed in the real world, mitigating the sim2real gap. We demonstrate the effectiveness of \emph{TieBot} in simulation and in the real world. In the real-world experiment, a MOVO robot successfully knots the tie.
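The abstract's core idea of generating action sequences through a differentiable simulator can be illustrated with a minimal sketch. The toy below replaces the paper's differentiable cloth simulator with linear point-mass dynamics, so the gradient of the subgoal loss with respect to every action can be written analytically; all function names and parameters here are illustrative assumptions, not TieBot's actual implementation.

```python
import numpy as np

def rollout(x0, actions, dt=0.1):
    # Toy "differentiable simulator": a point driven by velocity actions.
    # Stands in for the differentiable cloth simulation in the paper.
    x = x0
    for a in actions:
        x = x + dt * a
    return x

def grad_actions(x0, actions, goal, dt=0.1):
    # Gradient of the subgoal loss ||x_T - goal||^2 w.r.t. each action.
    # With linear dynamics, d x_T / d a_t = dt for every t, so each
    # action receives the same gradient 2 * dt * (x_T - goal).
    x_T = rollout(x0, actions, dt)
    return np.stack([2.0 * dt * (x_T - goal) for _ in actions])

def optimize(x0, goal, horizon=10, iters=200, lr=0.5, dt=0.1):
    # Gradient descent on the whole action sequence, analogous to
    # refining actions so the simulated state reaches an estimated subgoal.
    actions = np.zeros((horizon, x0.shape[0]))
    for _ in range(iters):
        actions -= lr * grad_actions(x0, actions, goal, dt)
    return actions

x0 = np.array([0.0, 0.0])
goal = np.array([1.0, -0.5])  # subgoal state, e.g. estimated from the video
actions = optimize(x0, goal)
print(np.linalg.norm(rollout(x0, actions) - goal))  # near zero after descent
```

In the actual system, such simulation gradients are combined with model-free RL rather than used alone, and the state is a full cloth mesh rather than a single point.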
[Figure: human demonstrations (first tie-knotting, second tie-knotting, towel-folding)]
[Figure: feature matching examples (first tie-knotting, second tie-knotting, towel-folding)]
[Figure: keypoint detection examples (first tie-knotting, second tie-knotting)]
[Figure: real2sim result examples (first tie-knotting, second tie-knotting, towel-folding)]
Real-world experiment:
We test our real-world policy 10 times, each time perturbing the tie's initial position by about 5 cm. The final success rate is 50%.