Visual-Locomotion: Learning to Walk on Complex Terrains with Vision

Wenhao Yu1, Deepali Jain1, Alejandro Escontrela1,2, Atil Iscen1, Peng Xu1, Erwin Coumans1, Sehoon Ha1,3, Jie Tan1, Tingnan Zhang1

1 Robotics at Google 2 University of California, Berkeley 3 Georgia Institute of Technology

Abstract

Vision is one of the most important perception modalities for legged robots to safely and efficiently navigate uneven terrains, such as stairs and stepping stones. However, training robots to effectively use high-dimensional visual input for locomotion is a challenging problem. In this work, we propose a framework to train a vision-based locomotion controller that enables a quadrupedal robot to traverse uneven environments. The key idea is to introduce a hierarchical structure with a high-level vision policy and a low-level motion controller. The high-level vision policy takes the perceived vision signals and the robot states as input and outputs the desired footholds and base movement of the robot. These are then realized by the low-level motion controller, which is composed of a position controller for the swing legs and an MPC-based torque controller for the stance legs. We train the vision policy using Deep Reinforcement Learning and demonstrate our approach on a variety of uneven environments such as randomly placed stepping stones, quincuncial piles, stairs, and moving platforms. We also validate our method on a real robot, which walks over a series of gaps and climbs up a platform. A minimal sketch of the hierarchical control flow is given below.
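To make the hierarchical structure concrete, here is a minimal Python sketch of one control step, assuming a trained high-level vision policy and standard swing/stance controllers. All class and function names (VisionPolicy, SwingLegController, StanceMPC, hierarchical_step) are hypothetical placeholders for illustration, not the paper's actual code.

```python
import numpy as np

class VisionPolicy:
    """High-level policy: maps vision input and robot state to desired
    footholds and base movement. In the paper this is a neural network
    trained with deep reinforcement learning; here it returns placeholders."""
    def __call__(self, vision_obs: np.ndarray, robot_state: np.ndarray):
        desired_footholds = np.zeros((4, 3))   # (x, y, z) target per leg
        desired_base_motion = np.zeros(6)      # base linear + angular velocity
        return desired_footholds, desired_base_motion

class SwingLegController:
    """Low-level position controller that tracks foothold targets for swing legs."""
    def compute(self, footholds: np.ndarray, robot_state: np.ndarray) -> np.ndarray:
        return np.zeros(12)  # joint position targets (placeholder)

class StanceMPC:
    """Low-level MPC-based torque controller for the stance legs."""
    def compute(self, base_motion: np.ndarray, robot_state: np.ndarray) -> np.ndarray:
        return np.zeros(12)  # joint torques (placeholder)

def hierarchical_step(vision_policy, swing_ctrl, stance_mpc,
                      vision_obs, robot_state):
    """One step of the two-level controller: the high-level policy picks
    footholds and base motion, and the low-level controllers realize them."""
    footholds, base_motion = vision_policy(vision_obs, robot_state)
    swing_targets = swing_ctrl.compute(footholds, robot_state)
    stance_torques = stance_mpc.compute(base_motion, robot_state)
    return swing_targets, stance_torques
```

In practice the high-level vision policy would run at a lower rate than the low-level swing and stance controllers, which track its most recent outputs.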


[Paper link]

Our visual-locomotion algorithm can successfully navigate a variety of terrains in simulation:

Pacing on randomly generated step-stones

Trotting on real-world-sized stairs (~18 cm high)

Walking over quincuncial piles

Trotting over quincuncial piles

Trotting over uneven terrains

Trotting on moving platforms

We also developed a sim-to-real transfer procedure to enable deployment of our policies on a real robot:

Laikago trotting over random step-stones in the real world.

Laikago walking up a step in the real world.