Reinforcement Learning with Videos:

Combining Offline Observations with Interaction

Karl Schmeckpeper, Oleh Rybkin, Kostas Daniilidis, Sergey Levine, and Chelsea Finn

Conference on Robot Learning (CoRL), 2020

Abstract: Reinforcement learning is a powerful framework for robots to acquire skills from experience, but often requires a substantial amount of online data collection. As a result, it is difficult to collect sufficiently diverse experiences that are needed for robots to generalize broadly. Videos of humans, on the other hand, are a readily available source of broad and interesting experiences. In this paper, we consider the question: can we perform reinforcement learning directly on experience collected by humans? This problem is particularly difficult, as such videos are not annotated with actions and exhibit substantial visual domain shift relative to the robot's embodiment. To address these challenges, we propose a framework for reinforcement learning with videos (RLV). RLV learns a policy and value function using experience collected by humans in combination with data collected by robots. In our experiments, we find that RLV is able to leverage such videos to learn challenging vision-based skills with less than half as many samples as RL methods that learn from scratch.

Reinforcement learning with videos. We study the setting where observational data is available, in the form of videos (top left). Our method can leverage such data to improve reinforcement learning by adding the videos to the replay buffer and directly performing RL on the observational data, while overcoming the challenges of unknown actions and domain shift between observation and interaction data.

Training inverse model (left): a batch of samples (o_int, a_int, o_int', r_int) is sampled from the action-conditioned replay pool, D_int, and the observations are encoded into features h_int, h_int'. An inverse model is trained to predict the action a_int from the features h_int, h_int'.
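
A minimal sketch of this step in PyTorch is shown below; `encoder`, `inverse_model`, the image resolution, and the action dimensionality are all illustrative placeholders rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

# Placeholder components (not the authors' exact architectures): a simple
# encoder phi(o) -> h for 64x64 RGB frames and an MLP inverse model
# g(h, h') -> a for a 4-dimensional action space.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.ReLU())
inverse_model = nn.Sequential(nn.Linear(2 * 256, 256), nn.ReLU(), nn.Linear(256, 4))
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(inverse_model.parameters()), lr=3e-4)

def inverse_model_update(o_int, a_int, o_int_next):
    """One gradient step on the inverse model using a batch from D_int."""
    h_int = encoder(o_int)            # features of o_int
    h_int_next = encoder(o_int_next)  # features of o_int'
    a_pred = inverse_model(torch.cat([h_int, h_int_next], dim=-1))
    loss = nn.functional.mse_loss(a_pred, a_int)  # regress the logged robot action
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```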

Generating actions and rewards for observation data (middle): the inverse model is used to predict the missing actions in the offline videos, â_obs, in the robot's action space, from the features (h_obs, h_obs') extracted from the observations (o_obs, o_obs'). To obtain the missing rewards r̂_obs, we label the final step in each trajectory with a large reward and all other steps with a small reward.
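
A sketch of how this labeling could be implemented, continuing the placeholder `encoder` and `inverse_model` from the previous snippet; the reward values of 1 and 0 stand in for the "large" and "small" rewards and are not taken from the paper.

```python
import torch

@torch.no_grad()
def label_observation_trajectory(obs_frames):
    """Annotate an action-free video with predicted actions and sparse rewards.

    obs_frames: tensor of shape (T+1, C, H, W) holding frames o_obs^0 ... o_obs^T.
    Returns per-transition tuples (h_obs, a_hat_obs, h_obs', r_hat_obs).
    """
    h = encoder(obs_frames)                      # features h_obs^0 ... h_obs^T
    h_t, h_next = h[:-1], h[1:]                  # (h_obs, h_obs') pairs
    # Predicted actions in the robot's action space.
    a_hat = inverse_model(torch.cat([h_t, h_next], dim=-1))
    # Sparse reward labels: a large reward on the final step, a small one elsewhere
    # (the exact values 1 and 0 here are illustrative).
    r_hat = torch.zeros(h_t.shape[0])
    r_hat[-1] = 1.0
    return h_t, a_hat, h_next, r_hat
```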

Training domain-invariant representation (right): We use adversarial domain confusion to align the features from the action-conditioned data, h_int, with the features from the action-free data, h_obs.
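
A generic way to implement this kind of adversarial alignment is a domain discriminator trained through a gradient-reversal layer, as in the sketch below; the discriminator architecture and loss are illustrative assumptions, and `encoder` is the placeholder from the snippets above.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates gradients on the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

discriminator = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 1))
confusion_optimizer = torch.optim.Adam(
    list(discriminator.parameters()) + list(encoder.parameters()), lr=3e-4)

def domain_confusion_update(o_int, o_obs):
    """Push h_int and h_obs toward an indistinguishable feature distribution."""
    h_int = GradReverse.apply(encoder(o_int))   # interaction features (label 1)
    h_obs = GradReverse.apply(encoder(o_obs))   # observation features (label 0)
    logits = torch.cat([discriminator(h_int), discriminator(h_obs)]).squeeze(-1)
    labels = torch.cat([torch.ones(len(h_int)), torch.zeros(len(h_obs))])
    # The discriminator minimizes this loss; the reversed gradients make the
    # encoder maximize it, i.e. confuse the domain classifier.
    loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
    confusion_optimizer.zero_grad()
    loss.backward()
    confusion_optimizer.step()
    return loss.item()
```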

Finally, we use an off-policy reinforcement learning algorithm on the resulting batch ((h_int, h_obs), (a_int, â_obs), (h_int', h_obs'), (r_int, r̂_obs)). By overcoming the challenges of missing actions, missing rewards, and domain shift, we are able to effectively use the observation data to improve the performance of a reinforcement learning agent.
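
Putting the pieces together, one possible training step could look like the following sketch, where `agent`, `update_from_batch`, and the two batch-sampling helpers are hypothetical names standing in for an off-policy actor-critic (such as SAC) and the two replay pools.

```python
import torch

def rlv_update(agent, sample_interaction_batch, sample_observation_batch):
    """One RLV training step: off-policy RL on the union of both data sources."""
    # Interaction data: robot transitions with logged actions and rewards.
    h_int, a_int, h_int_next, r_int = sample_interaction_batch()
    # Observation data: video transitions with inferred actions and rewards.
    h_obs, a_hat_obs, h_obs_next, r_hat_obs = sample_observation_batch()

    combined_batch = dict(
        features=torch.cat([h_int, h_obs]),
        actions=torch.cat([a_int, a_hat_obs]),
        next_features=torch.cat([h_int_next, h_obs_next]),
        rewards=torch.cat([r_int, r_hat_obs]),
    )
    # `update_from_batch` is a hypothetical interface to any off-policy
    # actor-critic (e.g., a SAC implementation) that consumes this batch.
    agent.update_from_batch(combined_batch)
```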

Qualitative Results

RLV can leverage action- and reward-free observational data to significantly increase the sample efficiency of training. We compare the trajectories of RLV and SAC [1] trained for the same number of environment steps. For all tasks, the agent receives only sparse rewards and pixel observations.

[Videos: rollouts from RLV (Ours) and SAC [1] on the simulated tasks]

RLV can leverage observational data even when that data comes from observing humans and exhibits substantial domain shift relative to the agent's environment (see videos below).

[Videos: rollouts from RLV (Ours) and SAC [1] on tasks trained with human observational data]

Human Dataset

We use videos of human pushing and human drawer opening as observational data during the training of RLV.

The raw frames can be found here (2GB) for pushing and here (870MB) for drawer opening.

Our post-processed replay pools can be found here (250MB) for pushing and here (200MB) for drawer opening.

References

[1] Haarnoja, T., Zhou, A., Abbeel, P. and Levine, S., 2018. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv preprint arXiv:1801.01290.