LEEPS: Learning End-to-End Legged Perceptive Parkour Skills on Challenging Terrains
Tangyu Qian, Hao Zhang, Zhangli Zhou, Hao Wang, Mingyu Cai, and Zhen Kan
Abstract
Empowering legged robots with agile maneuvers is a longstanding challenge. While existing works have proposed diverse control-based and learning-based methods, endowing robots with animal-like perception and athleticism remains an open problem. Toward this goal, we develop an End-to-End Legged Perceptive Parkour Skill Learning (LEEPS) framework that trains quadruped robots to master parkour skills in complex environments. In particular, LEEPS incorporates a vision-based perception module equipped with multi-layered scans, supplying the robot with comprehensive, precise, and adaptable information about its surroundings. Leveraging this visual information, a position-based task formulation frees the robot from velocity-tracking constraints and steers it toward the target through novel reward mechanisms. The resulting controller enables an affordable quadruped robot to traverse obstacles that were previously out of reach. We evaluate LEEPS on a variety of challenging tasks, demonstrating its effectiveness, robustness, and generalizability.
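To make the position-based task formulation above concrete, the sketch below shows how a goal-reaching reward might replace a conventional velocity-tracking one in a batched simulator. The tensor names (`base_pos`, `goal_pos`), the `sigma` scale, and the exponential shape are all assumptions for illustration, not the reward actually used by LEEPS.

```python
import torch

def position_tracking_reward(base_pos: torch.Tensor,
                             goal_pos: torch.Tensor,
                             sigma: float = 0.5) -> torch.Tensor:
    """Reward that grows as the robot's base approaches the goal position.

    base_pos: (num_envs, 2) planar base position of each simulated robot.
    goal_pos: (num_envs, 2) per-env target position on the terrain.
    Unlike a velocity-tracking reward, this leaves the robot free to choose
    its own speed and gait while still driving it toward the target.
    """
    dist = torch.norm(goal_pos - base_pos, dim=-1)   # (num_envs,)
    return torch.exp(-dist / sigma)                  # in (0, 1], max at goal
```

Because the reward depends only on where the robot ends up rather than how fast it moves at each step, the policy is free to slow down before a gap or accelerate into a jump, which is the flexibility the abstract refers to.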
Method
Main Results
1. Parkour Performance on Nine Challenging Terrains
Rock Jumbles
Discrete Rough
Slope
Stair
Stepping Stones
Tunnel
Barrier
Gap
Log Bridge
2. Visualization of the Perception Module
Visualization of Multi-layered Scans
Visualization of Camera Depth Input
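For intuition about what a multi-layered scan could look like in practice, here is a minimal sketch that samples a terrain heightfield on a body-aligned grid around the robot at several fixed offsets above the base. The grid size, spacing, layer offsets, and the `heightfield_lookup` helper are all hypothetical, not the paper's implementation.

```python
import numpy as np

def multi_layer_scan(base_pos, base_yaw, heightfield_lookup,
                     grid=(11, 11), spacing=0.1,
                     layer_offsets=(-0.2, 0.0, 0.3)):
    """Sample terrain clearance around the robot at several scan heights.

    base_pos:           (3,) world position of the robot base.
    base_yaw:           heading, used to rotate the scan grid with the robot.
    heightfield_lookup: callable (x, y) -> terrain height (hypothetical helper).
    Returns (n_layers, n_points): vertical distance from each scan layer
    (base height + offset) down to the terrain under each grid point,
    giving the policy a volumetric view of gaps, steps, and overhangs.
    """
    nx, ny = grid
    xs = (np.arange(nx) - (nx - 1) / 2) * spacing
    ys = (np.arange(ny) - (ny - 1) / 2) * spacing
    gx, gy = np.meshgrid(xs, ys, indexing="ij")
    pts = np.stack([gx.ravel(), gy.ravel()], axis=-1)   # body-frame grid

    c, s = np.cos(base_yaw), np.sin(base_yaw)
    rot = np.array([[c, -s], [s, c]])
    world = pts @ rot.T + base_pos[:2]                  # world-frame points

    terrain = np.array([heightfield_lookup(x, y) for x, y in world])
    layer_z = base_pos[2] + np.asarray(layer_offsets)   # absolute scan heights
    return layer_z[:, None] - terrain[None, :]          # (n_layers, n_points)
```

Stacking several layers, rather than a single height scan, is what lets such an observation distinguish terrain the robot must step over from terrain it can pass under, e.g. a barrier versus a tunnel.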
3. Extreme Parkour on an Unstructured Terrain
Supplementary Materials
Domain Randomization Parameters
Network Architecture
PPO Hyper-parameters
Reward Function Weights
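Since the reward-weight table itself is not reproduced here, the following sketch only illustrates how such weights are commonly organized in legged-gym-style training code. Every term name and value below is an illustrative placeholder, not the entries of LEEPS's actual table.

```python
# Illustrative placeholder values only -- not the weights from the paper's table.
reward_weights = {
    "position_tracking":  1.0,   # drive the base toward the goal position
    "heading_tracking":   0.5,   # face the goal while approaching it
    "action_rate":       -0.01,  # penalize jerky changes between actions
    "collision":         -1.0,   # penalize undesired body contacts
    "torques":           -1e-4,  # energy / actuator-effort penalty
}

def total_reward(terms: dict) -> float:
    """Weighted sum of per-step reward terms, keyed as in reward_weights."""
    return sum(reward_weights[name] * value for name, value in terms.items())
```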