Emergent Real-World Robotic Skills via Unsupervised Off-Policy Reinforcement Learning

Abstract

Reinforcement learning provides a general framework for learning robotic skills while minimizing engineering effort. However, most reinforcement learning algorithms assume that a well-designed reward function is provided, and learn a single behavior for that single reward function. Such reward functions can be difficult to design in practice. Can we instead develop efficient reinforcement learning methods that acquire diverse skills without any reward function, and then repurpose these skills for downstream tasks? In this paper, we demonstrate that a recently proposed unsupervised skill discovery algorithm can be extended into an efficient off-policy method, making it suitable for performing unsupervised reinforcement learning in the real world. First, we show that our proposed algorithm provides substantial improvement in learning efficiency, making reward-free real-world training feasible. Second, we move beyond the simulation environments and evaluate the algorithm on real physical hardware. On quadrupeds, we observe that locomotion skills with diverse gaits and different orientations emerge without any rewards or demonstrations. We also demonstrate that the learned skills can be composed using model predictive control for goal-oriented navigation, without any additional training.

Overview

We develop an asynchronous off-policy version of Dynamics-Aware Discovery of Skills, coined off-DADS. The gains in sample efficiency enable us to deploy the algorithm in the real world: we present locomotion skills learned by a quadruped without any rewards or demonstrations. The video below gives a brief overview of the algorithm, the training routine, and the evolution of the skills learned by the quadruped.
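
For concreteness, the sketch below shows one way to approximate the DADS intrinsic reward; off-DADS recomputes this reward under the current model for transitions sampled from the replay buffer, which is what makes reusing old experience off-policy sound. The skill_dynamics and skill_prior objects are hypothetical stand-ins for the learned model q(s'|s, z) and the skill prior p(z), not the actual interfaces from our code.

import numpy as np

def dads_intrinsic_reward(skill_dynamics, skill_prior, s, z, s_next, num_samples=100):
    # Approximates r(s, z, s') = log q(s'|s, z) - log E_{z'~p(z)}[q(s'|s, z')]
    # with a Monte Carlo estimate of the marginal over num_samples prior skills.
    log_q = skill_dynamics.log_prob(s, z, s_next)
    z_samples = [skill_prior.sample() for _ in range(num_samples)]
    log_qs = np.array([skill_dynamics.log_prob(s, zp, s_next) for zp in z_samples])
    log_marginal = np.logaddexp.reduce(log_qs) - np.log(num_samples)
    # Relabeling replayed transitions with this reward, computed under the
    # *current* skill-dynamics model, is the key to training off-policy.
    return log_q - log_marginal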

Emergent Skills

We show a sample of the skills learned using off-DADS in this section. The first row shows skills learned by the quadruped through real-world training; the skills move in different directions with different gaits, all acquired autonomously. The second row shows simulation results for a three-armed manipulation setup in which the valve can be rotated freely. Again, we acquire skills that rotate the object in different directions without any extrinsic reward.

Goal Navigation

One of the benefits of the DADS formulation is that the learned skills can be repurposed to solve downstream tasks using model-based control, potentially zero-shot. We demonstrate this on real-world goal navigation, where model-based planning composes the skills learned by the quadruped to navigate to different goals.
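
As a rough illustration of this zero-shot composition, the sketch below plans over skill sequences with simple random shooting against the learned skill dynamics; the planner we actually use refines its plans rather than sampling them once, and skill_dynamics.predict is a hypothetical one-step rollout of the learned model, named here only for illustration.

import numpy as np

def plan_next_skill(skill_dynamics, state, goal_xy, skill_dim=2, horizon=4,
                    num_candidates=64, steps_per_skill=10):
    # Receding-horizon planning: score candidate skill sequences by rolling
    # them out in the learned model, and keep the cheapest sequence.
    best_cost, best_plan = np.inf, None
    for _ in range(num_candidates):
        plan = np.random.uniform(-1.0, 1.0, size=(horizon, skill_dim))
        s = state
        for z in plan:
            for _ in range(steps_per_skill):  # hold each skill for a few steps
                s = skill_dynamics.predict(s, z)
        cost = np.linalg.norm(s[:2] - goal_xy)  # predicted (x, y) distance to goal
        if cost < best_cost:
            best_cost, best_plan = cost, plan
    return best_plan[0]  # execute the first skill, then replan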

Valve Turn

We demonstrate that the skills can be repurposed for downstream tasks in the manipulation domain as well. We reuse the skills learned for rotating the valve to turn it from an arbitrary starting position to an arbitrary goal (marked by the green rod). We again use model-based control to compose the learned skills, which in this case yields a zero-shot solution to the task.
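
Only the planning cost changes between navigation and valve turning. A hedged example of that substitution is below; reading the valve angle from the first state entry is purely an assumption for illustration.

import numpy as np

def valve_cost(predicted_state, goal_angle):
    # Angular distance between the predicted valve angle and the goal angle.
    # Assumption: the valve angle is the first entry of the state vector.
    delta = predicted_state[0] - goal_angle
    return np.abs(np.arctan2(np.sin(delta), np.cos(delta)))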

Citation

@article{sharma2020emergent,
    title={Emergent Real-World Robotic Skills via Unsupervised Off-Policy Reinforcement Learning},
    author={Sharma, Archit and Ahn, Michael and Levine, Sergey and Kumar, Vikash and Hausman, Karol and Gu, Shixiang},
    journal={arXiv preprint arXiv:2004.12974},
    year={2020}
}