Offline Reinforcement Learning from Images with Latent Space Models

ABSTRACT


Offline reinforcement learning (RL) refers to the problem of learning policies from a static dataset of environment interactions. Offline RL enables extensive use and re-use of historical datasets, while also alleviating safety concerns associated with online exploration, thereby expanding the real-world applicability of RL. Most prior work in offline RL has focused on tasks with compact state representations. However, the ability to learn directly from rich observation spaces like images is critical for real-world applications such as robotics. In this work, we build on recent advances in model-based algorithms for offline RL, and extend them to high-dimensional visual observation spaces. Model-based offline RL algorithms have achieved state-of-the-art results in state-based tasks and have strong theoretical guarantees. However, they rely crucially on the ability to quantify uncertainty in the model predictions, which is particularly challenging with image observations. To overcome this challenge, we propose to learn a latent-state dynamics model and represent the uncertainty in the latent space. Our approach is both tractable in practice and corresponds to maximizing a lower bound of the ELBO in the unknown POMDP. In experiments on a range of challenging image-based locomotion and manipulation tasks, we find that our algorithm significantly outperforms previous offline model-free RL methods as well as state-of-the-art online visual model-based RL methods. Moreover, our approach excels on an image-based drawer-closing task on a real robot using a pre-existing dataset.
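To make the core idea concrete, the snippet below is a minimal sketch (not the paper's implementation) of representing uncertainty in a learned latent space by penalizing rewards with the disagreement of an ensemble of latent dynamics models. The ensemble members are stand-in random linear maps, and names such as `latent_dim`, `n_models`, and `penalty_weight` are illustrative assumptions.

```python
# Hypothetical sketch: uncertainty-penalized rewards via latent ensemble disagreement.
import numpy as np

rng = np.random.default_rng(0)
latent_dim, action_dim, n_models = 32, 4, 5
penalty_weight = 1.0  # illustrative value, not from the paper

# Stand-in for K learned latent transition models; here random linear maps.
W = rng.normal(scale=0.1, size=(n_models, latent_dim, latent_dim + action_dim))

def ensemble_step(z, a):
    """Predict the next latent state with every ensemble member."""
    za = np.concatenate([z, a])
    return np.stack([w @ za for w in W])  # shape: (n_models, latent_dim)

def penalized_reward(z, a, reward):
    """Subtract an uncertainty penalty based on ensemble disagreement."""
    preds = ensemble_step(z, a)
    disagreement = preds.std(axis=0).mean()  # scalar uncertainty estimate
    return reward - penalty_weight * disagreement

z = rng.normal(size=latent_dim)
a = rng.normal(size=action_dim)
print(penalized_reward(z, a, reward=1.0))
```

The design choice illustrated here is that disagreement among ensemble members serves as a tractable proxy for epistemic uncertainty in the latent dynamics, which is far harder to quantify directly in pixel space.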

SIMULATION RESULTS

REAL ROBOT EXPERIMENT

LOMPO evaluation on the real robot task, trained on the dataset from Chen et al. (2020). LOMPO is the only method that solves the task, achieving a 76% success rate.

EXAMPLE SEQUENCES AND SAMPLES FROM THE MODEL


Figure: samples from our variational ensemble model (panels: ground truth sequence, posterior samples, conditional prior samples). The conditional prior samples show model rollouts conditioned on the first 5 observations and the action sequence.
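For readers unfamiliar with this evaluation protocol, the following is a minimal sketch of how such conditional prior rollouts are typically produced in latent dynamics models: the posterior encodes the first few ground-truth observations, after which the learned prior rolls the latent state forward using only the action sequence. The stub functions below are placeholders standing in for the learned posterior, prior, and decoder, not the paper's model.

```python
# Hypothetical sketch of a conditional prior rollout with stub model components.
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 32

def posterior_update(z, obs, act):
    # Stub for the posterior q(z_t | z_{t-1}, a_{t-1}, o_t).
    return 0.9 * z + 0.1 * obs[:latent_dim] + 0.01 * rng.normal(size=latent_dim)

def prior_update(z, act):
    # Stub for the prior p(z_t | z_{t-1}, a_{t-1}).
    return 0.95 * z + 0.05 * rng.normal(size=latent_dim)

def decode(z):
    # Stub for the image decoder p(o_t | z_t); here it just returns the latent.
    return z

def conditional_prior_rollout(observations, actions, context=5):
    z = np.zeros(latent_dim)
    # Condition on the first `context` ground-truth observations via the posterior.
    for t in range(context):
        z = posterior_update(z, observations[t], actions[t])
    # Roll out the remaining steps with the prior, driven only by the actions.
    frames = []
    for t in range(context, len(actions)):
        z = prior_update(z, actions[t])
        frames.append(decode(z))
    return frames

obs = rng.normal(size=(20, 64))   # dummy observation features
acts = rng.normal(size=(20, 4))   # dummy action sequence
print(len(conditional_prior_rollout(obs, acts)))  # 15 predicted frames
```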

ACKNOWLEDGEMENTS

We want to thank Suraj Nair for sharing the BEE dataset with us and for his help with setting up the Panda drawer environment. This work was supported in part by ONR grant N00014-20-1-2675 and Intel Corporation. CF is a CIFAR Fellow in the Learning in Machines and Brains program. Aravind Rajeswaran was supported by a JP Morgan PhD Fellowship (2020).