PLAS: Latent Action Space for Offline Reinforcement Learning

Robotics Institute, Carnegie Mellon University

Conference on Robot Learning (CoRL) 2020

Also presented at NeurIPS 2020 Offline Reinforcement Learning Workshop

Highlights

  • Proposes learning the policy in a latent action space, which naturally avoids out-of-distribution actions without restricting the policy to the behavior policy's action distribution (see the sketch after this list)

  • Separates in-distribution generalization from out-of-distribution generalization in offline RL

  • Evaluated on both simulated benchmarks and real-robot experiments
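
The first stage implied by this idea is to fit a generative model to the actions in the dataset. Below is a minimal, hypothetical PyTorch sketch of a conditional VAE trained on (state, action) pairs; the layer sizes, the sampling details, and the KL weight `beta` are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVAE(nn.Module):
    """Conditional VAE over dataset actions; its decoder maps a state and a
    latent code to an action that resembles actions seen in the dataset."""
    def __init__(self, state_dim, action_dim, latent_dim, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * latent_dim),  # outputs mean and log-variance
        )
        self.decoder = nn.Sequential(
            nn.Linear(state_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),  # actions scaled to [-1, 1]
        )

    def forward(self, state, action):
        mu, log_var = self.encoder(torch.cat([state, action], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # reparameterization
        recon = self.decoder(torch.cat([state, z], -1))
        return recon, mu, log_var

def cvae_loss(model, state, action, beta=0.5):
    # Reconstruction term keeps decoded actions close to dataset actions;
    # KL term keeps the latent posterior close to the standard normal prior.
    recon, mu, log_var = model(state, action)
    recon_loss = F.mse_loss(recon, action)
    kl = -0.5 * (1 + log_var - mu.pow(2) - log_var.exp()).mean()
    return recon_loss + beta * kl
```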

Abstract

The goal of offline reinforcement learning is to learn a policy from a fixed dataset, without further interactions with the environment. This setting will become an increasingly important paradigm for real-world applications of reinforcement learning such as robotics, in which data collection is slow and potentially dangerous. Existing off-policy algorithms perform poorly on static datasets due to extrapolation errors from out-of-distribution actions. This leads to the challenge of constraining the policy to select actions within the support of the dataset during training. We propose to simply learn the Policy in the Latent Action Space (PLAS) such that this requirement is naturally satisfied. We evaluate our method on continuous control benchmarks in simulation and a deformable object manipulation task with a physical robot. We demonstrate that our method provides consistently competitive performance across various continuous control tasks and different types of datasets, outperforming existing offline reinforcement learning methods with explicit constraints.
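
To make the mechanism concrete, here is a hypothetical sketch of the second stage, continuing the CVAE sketch above: the policy outputs a latent code z, and the frozen decoder maps (state, z) to an action, so the policy can only select actions the decoder can produce, i.e., actions within the support of the dataset. The latent bound `max_latent` and the network sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class LatentPolicy(nn.Module):
    """Deterministic policy acting in the latent space of a pretrained CVAE."""
    def __init__(self, state_dim, latent_dim, hidden=256, max_latent=2.0):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim), nn.Tanh(),
        )
        # Bound z to a high-density region of the N(0, I) prior so the frozen
        # decoder is only queried where it was trained.
        self.max_latent = max_latent

    def forward(self, state):
        return self.max_latent * self.net(state)

def select_action(policy, cvae, state):
    # The decoder is frozen after pretraining, so every action it emits
    # stays within the distribution of the offline dataset.
    with torch.no_grad():
        z = policy(state)
        return cvae.decoder(torch.cat([state, z], -1))
```

In the full method, the latent policy would be trained with a standard off-policy actor-critic objective, backpropagating through the frozen decoder; that training loop is omitted here for brevity.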

Videos

  • MuJoCo Locomotion Tasks (MuJoCo.mp4)

  • Franka Kitchen Tasks (Franka.mp4)

  • Cloth Sliding Task on Sawyer (Cloth_Sliding.mp4)

  • Adroit Hand Tasks (hand.mp4)

Acknowledgement

This material is based upon work supported by the United States Air Force and DARPA under Contract No. FA8750-18-C-0092, by LG Electronics, and by the National Science Foundation under Grant No. IIS-1849154. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the United States Air Force, DARPA, or the National Science Foundation.