Robust Recovery Motion Control for Quadrupedal Robots via Learned Terrain Imagination

I Made Aswin Nahrendra, Minho Oh, Byeongho Yu, Hyungtae Lim, and Hyun Myung

Urban Robotics Laboratory

School of Electrical Engineering, KAIST

Abstract

Quadrupedal robots have emerged as a cutting-edge platform for assisting humans, finding applications in tasks related to inspection and exploration in remote areas. Nevertheless, their floating base structure renders them susceptible to failure in cluttered environments, where manual recovery by a human operator may not always be feasible. Several recent studies have presented recovery controllers employing deep reinforcement learning algorithms. However, these controllers are not specifically designed to operate effectively in cluttered environments, such as stairs and slopes, which restricts their applicability. In this study, we propose a robust all-terrain recovery policy to facilitate rapid and secure recovery in cluttered environments. We substantiate the superiority of our proposed approach through simulations and real-world outdoor tests encompassing various terrain types.

DreamRiser at the ICRA 2023 Quadruped Robot Challenge (QRC)

The robot faced difficulties while climbing an inclined sponge with hurdles and eventually fell. Although this scenario lies outside the training distribution, DreamRiser's policy enables the robot to swiftly recover its pose and continue locomotion using DreamWaQ's policy.

A1 on sponge

Go1 with payload on sponge

A1 on outdoor terrain

Go1 with payload on outdoor terrain

A1 on plastic boxes

A1 on bumps

Related Work

DreamWaQ: Learning Robust Quadrupedal Locomotion With Implicit Terrain Imagination via Deep Reinforcement Learning

I Made Aswin Nahrendra, Byeongho Yu, and Hyun Myung

IEEE International Conference on Robotics and Automation (ICRA) 2023