Visual Representation Learning with Stochastic Frame Prediction

Huiwon Jang1, Dongyoung Kim1, Junsu Kim1, Jinwoo Shin1, Pieter Abbeel2, Younggyo Seo1,3 

1KAIST  2UC Berkeley  3Dyson Robot Learning Lab

TL;DR.

Learning a stochastic frame prediction model from videos enhances image representations by capturing temporal information between frames.

Abstract

Self-supervised learning of image representations by predicting future frames is a promising direction but remains challenging due to the under-determined nature of frame prediction: multiple potential futures can arise from a single current frame. To tackle this challenge, we revisit the idea of stochastic video generation, which learns to capture uncertainty in frame prediction, and explore its effectiveness for representation learning. Specifically, we design a framework that trains a stochastic frame prediction model to learn temporal information between frames. Moreover, to learn dense information within each frame, we introduce an auxiliary masked image modeling objective along with a shared decoder architecture. We find that this architecture allows for combining both objectives in a synergistic and compute-efficient manner. We demonstrate the effectiveness of our framework on a variety of tasks from video label propagation and vision-based robot learning domains, including video segmentation, pose tracking, and vision-based robotic locomotion and manipulation.

Representation Learning with Stochastic Frame Prediction (RSP)

We present Representation learning with Stochastic frame Prediction (RSP), a framework that learns visual representations from videos via stochastic future frame prediction. The key idea is to learn image representations that capture temporal information between frames by training a stochastic frame prediction model on videos. To this end, we revisit the idea of stochastic video generation (SVG; Denton et al., 2018), which trains a time-dependent prior over future frames to capture uncertainty in frame prediction. Our key contribution lies in exploring various design choices and incorporating recent advances in self-supervised learning into the video generation model to reconfigure it for representation learning. We find that RSP learns stronger image representations from complex real-world videos than deterministic prediction objectives.
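To make this objective concrete, here is a minimal PyTorch sketch of an SVG-style stochastic frame prediction loss: a learned prior predicts the future-frame latent from the current frame, a posterior infers it from both frames, and the decoder reconstructs the future frame from the current-frame representation and a sampled latent. All names (GaussianHead, stochastic_prediction_loss, the toy modules) and hyperparameters are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GaussianHead(nn.Module):
    """Maps a feature vector to the mean and log-variance of a diagonal Gaussian."""

    def __init__(self, in_dim: int, z_dim: int):
        super().__init__()
        self.net = nn.Linear(in_dim, 2 * z_dim)

    def forward(self, h):
        mu, logvar = self.net(h).chunk(2, dim=-1)
        return mu, logvar


def kl_divergence(mu_q, logvar_q, mu_p, logvar_p):
    """KL(q || p) between two diagonal Gaussians, summed over latent dims."""
    var_q, var_p = logvar_q.exp(), logvar_p.exp()
    kl = 0.5 * (logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)
    return kl.sum(dim=-1).mean()


def stochastic_prediction_loss(encoder, prior, posterior, decoder, x_t, x_future, beta=1e-4):
    """SVG-style objective: reconstruct the future frame, keep posterior close to the learned prior."""
    h_t = encoder(x_t)            # representation of the current frame
    h_future = encoder(x_future)  # representation of the future frame

    mu_p, logvar_p = prior(h_t)                                     # learned prior p(z | x_t)
    mu_q, logvar_q = posterior(torch.cat([h_t, h_future], dim=-1))  # posterior q(z | x_t, x_{t+k})

    # Reparameterized sample from the posterior during training.
    z = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()

    x_pred = decoder(h_t, z)               # predict the future frame from (h_t, z)
    recon = F.mse_loss(x_pred, x_future)   # reconstruction term
    kl = kl_divergence(mu_q, logvar_q, mu_p, logvar_p)
    return recon + beta * kl


# Toy usage with MLPs on 64x64 frames (illustrative only).
class ToyDecoder(nn.Module):
    def __init__(self, h_dim=256, z_dim=32):
        super().__init__()
        self.net = nn.Linear(h_dim + z_dim, 3 * 64 * 64)

    def forward(self, h, z):
        return self.net(torch.cat([h, z], dim=-1)).view(-1, 3, 64, 64)


encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.ReLU())
prior, posterior = GaussianHead(256, 32), GaussianHead(512, 32)
decoder = ToyDecoder()

x_t, x_future = torch.randn(2, 8, 3, 64, 64)  # a batch of (current, future) frame pairs
loss = stochastic_prediction_loss(encoder, prior, posterior, decoder, x_t, x_future)
loss.backward()
```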

To learn dense information within each frame, we further introduce an auxiliary masked autoencoding objective, along with a shared decoder architecture that enables us to incorporate this auxiliary objective in a synergistic and compute-efficient manner.
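The sketch below illustrates the auxiliary masked-autoencoding branch under simplifying assumptions: a hypothetical patch-level encoder and a shared_decoder callable stand in for the actual architecture, and the loss is computed only on the masked patches, MAE-style. It shows the masking-and-reconstruct pattern, not the authors' exact shared-decoder design.

```python
import torch
import torch.nn as nn


def masked_modeling_loss(encoder, shared_decoder, frame_patches, mask_ratio=0.75):
    """frame_patches: (batch, num_patches, patch_dim) tensor of patchified frames."""
    B, N, D = frame_patches.shape
    num_keep = int(N * (1 - mask_ratio))

    # Randomly choose which patches stay visible for each sample.
    noise = torch.rand(B, N, device=frame_patches.device)
    ids_keep = noise.argsort(dim=1)[:, :num_keep]
    visible = torch.gather(frame_patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))

    # Encode only the visible patches, then decode a prediction for every patch slot.
    tokens = encoder(visible)                      # (B, num_keep, token_dim)
    pred = shared_decoder(tokens, num_patches=N)   # (B, N, D)

    # Reconstruction loss is computed on the masked (dropped) patches only.
    mask = torch.ones(B, N, device=frame_patches.device)
    mask.scatter_(1, ids_keep, 0.0)                # 1 = masked, 0 = visible
    per_patch = ((pred - frame_patches) ** 2).mean(dim=-1)
    return (per_patch * mask).sum() / mask.sum()


# Toy usage: an MLP encoder and a decoder that broadcasts a pooled prediction to
# every patch position (purely illustrative, not the actual architecture).
mae_encoder = nn.Sequential(nn.Linear(48, 64), nn.ReLU())
readout = nn.Linear(64, 48)

def toy_shared_decoder(tokens, num_patches):
    pooled = tokens.mean(dim=1, keepdim=True)      # (B, 1, 64)
    return readout(pooled).expand(-1, num_patches, -1)

patches = torch.randn(4, 16, 48)                   # 4 frames, 16 patches of dim 48
print(masked_modeling_loss(mae_encoder, toy_shared_decoder, patches))
```

In the full framework, the decoder used here is the same module that performs frame prediction, which is what makes the two objectives cheap to combine.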

Main Experiments: Vision-based Robot Learning

(a) Examples of CortexBench, RLBench, and FrankaKitchen

(b) Aggregate results on vision-based robot learning (Interquartile mean; IQM)
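For readers unfamiliar with the aggregation metric, the snippet below sketches one common way to compute the interquartile mean (IQM): sort the scores and average the middle 50%, which is more robust to outlier runs than the plain mean. The scores shown are illustrative placeholders, not results from the paper.

```python
import numpy as np


def interquartile_mean(scores: np.ndarray) -> float:
    """Mean of the middle 50% of the flattened scores."""
    x = np.sort(scores.ravel())
    n = len(x)
    lo, hi = n // 4, n - n // 4        # drop the bottom and top quartiles
    return float(x[lo:hi].mean())


# Illustrative per-run scores (not actual results).
scores = np.array([0.42, 0.55, 0.61, 0.38, 0.70, 0.50, 0.47, 0.66])
print(interquartile_mean(scores))      # averages the middle four scores
```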

Main Experiments: Video Label Propagation

Ablation Study and Analysis: Design Choices

TBD: ablation study table