Abstract
Peg-in-hole (PiH) assembly is a fundamental yet challenging robotic manipulation task. While reinforcement learning (RL) has shown promise in tackling such tasks, it requires extensive exploration. In this paper, we propose a novel visual–tactile skill learning framework for the PiH task that leverages its inverse task, i.e., peg-out-of-hole (PooH) disassembly, to facilitate PiH learning. Compared to PiH, PooH is inherently easier because it only requires overcoming friction rather than achieving precise alignment, which makes data collection more efficient. To this end, we formulate both PooH and PiH as Partially Observable Markov Decision Processes (POMDPs) in a unified environment with a shared visual–tactile observation space. A visual–tactile PooH policy is first trained; its trajectories, containing kinematic, visual, and tactile information, are temporally reversed and action-randomized to provide expert data for PiH. During policy learning, visual sensing facilitates the peg–hole approach, while tactile measurements compensate for peg–hole misalignment. Experiments across diverse peg–hole geometries show that the visual–tactile policy attains 6.4% lower contact forces than its single-modality counterparts, and that our framework achieves average success rates of 87.5% on seen objects and 77.1% on unseen objects, outperforming direct RL methods that train PiH policies from scratch by 18.1% in success rate.
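The core data-generation idea in the abstract, reversing PooH trajectories in time and randomizing their actions to obtain PiH expert data, can be sketched as follows. This is an illustrative reconstruction, not the paper's code; the trajectory format, the sign-flip interpretation of "reversal," and the `noise_scale` parameter are assumptions.

```python
import random

def reverse_trajectory(trajectory, noise_scale=0.05):
    """Turn a PooH rollout into a PiH demonstration (illustrative sketch).

    `trajectory` is assumed to be a list of (observation, action) pairs,
    where each action is a list of motion components. The rollout is
    reversed in time, each action is negated so that withdrawal motion
    becomes insertion motion, and small uniform perturbations are added
    ("action randomization") to diversify the resulting expert data.
    """
    reversed_demo = []
    for obs, action in reversed(trajectory):
        randomized = [-a + random.uniform(-noise_scale, noise_scale)
                      for a in action]
        reversed_demo.append((obs, randomized))
    return reversed_demo
```

With `noise_scale=0`, this is a pure time reversal with negated actions; the noise term is what distinguishes action-randomized demonstrations from exact replays.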
Experiments
Peg-out-of-Hole in Simulation
Peg-in-Hole in Simulation
Figure: simulation rollouts across Red Cube, Red Cylinder, Red Hexagon, and White Cube.
Quantitative Results in Simulation
Peg-in-hole success rates (95% Wilson CIs) under 0.5 mm clearance.
Peg-in-hole success rates (95% Wilson CIs) under 1.0 mm clearance.
Peg-in-hole success rates (95% Wilson CIs) under 2.0 mm clearance.
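The table captions above report 95% Wilson confidence intervals for the success rates. For reference, the standard Wilson score interval for a binomial proportion can be computed as below; this is a textbook formula, not code from the paper, and the function name is illustrative.

```python
import math

def wilson_ci(successes, trials, z=1.96):
    """95% Wilson score interval for a binomial success rate.

    `z` = 1.96 corresponds to a 95% confidence level. Returns a
    (lower, upper) pair clipped to [0, 1].
    """
    if trials == 0:
        return (0.0, 1.0)
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(
        p * (1 - p) / trials + z**2 / (4 * trials**2))
    return (max(0.0, center - half), min(1.0, center + half))
```

Unlike the normal-approximation interval, the Wilson interval stays inside [0, 1] and remains informative for small trial counts or success rates near 0 or 1, which is why it is commonly used for reporting robot task success rates.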
Sim-to-Real Policy Transfer
Figure: real-world trials across Red Cube, Red D-Shape, Red Cylinder, Red Hexagon, White Cube, and Red Scalene Triangle.
Real-world success counts (n successes out of N trials) of the proposed method across diverse objects.