Greedy Hierarchical Variational Autoencoders for Large-Scale Video Prediction


Bohan Wu, Suraj Nair, Roberto Martín-Martín, Li Fei-Fei*, Chelsea Finn*


CVPR 2021

A video prediction model that generalizes to diverse scenes would enable intelligent agents such as robots to perform a variety of tasks via planning with the model. However, while existing video prediction models have produced promising results on small datasets, they suffer from severe underfitting when trained on large and diverse datasets. To address this underfitting challenge, we first observe that the ability to train larger video prediction models is often bottlenecked by the memory constraints of GPUs or TPUs. Second, while deep hierarchical latent variable models can better capture the inherent uncertainty of the future, end-to-end optimization of such models is notably difficult. Our key insight is that greedy and modular optimization of hierarchical variational autoencoders (VAEs) can simultaneously address both the memory constraints and the optimization challenges of large-scale video prediction. We introduce Greedy Hierarchical Variational Autoencoders (GHVAEs), a method that learns high-fidelity video predictions by greedily training each level of a hierarchical VAE. In comparison to state-of-the-art models, GHVAEs provide significant gains in prediction performance and memory efficiency on three video datasets, achieve a 35% higher success rate on GHVAE-based real-robot tasks, and improve performance monotonically as more modules are added.
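To make the greedy, module-by-module idea concrete, here is a minimal, hypothetical PyTorch sketch: each VAE level is trained while all lower levels stay frozen, so only one module's activations and gradients occupy accelerator memory at a time. This is not the paper's architecture or objective (the actual model is convolutional and operates on video); the layer sizes, `elbo_loss`, and `train_greedily` names below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VAEModule(nn.Module):
    """One hierarchy level: encode input to a latent distribution,
    decode a reconstruction. Fully connected layers are a stand-in
    for the paper's convolutional modules (hypothetical sizes)."""
    def __init__(self, in_dim, latent_dim):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, in_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z = mu + sigma * eps
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.dec(z), mu, logvar

def elbo_loss(x, recon, mu, logvar, beta=1.0):
    """Standard (negative) ELBO: reconstruction term plus KL to N(0, I)."""
    rec = nn.functional.mse_loss(recon, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kl

def train_greedily(modules, data_loader, steps_per_module=1000):
    """Train one level at a time. Frozen lower levels run under
    no_grad, so only the current module is backpropagated through."""
    for level, module in enumerate(modules):
        opt = torch.optim.Adam(module.parameters(), lr=1e-4)
        step = 0
        while step < steps_per_module:
            for x in data_loader:
                with torch.no_grad():  # frozen lower levels supply inputs
                    for prev in modules[:level]:
                        x = prev.mu(prev.enc(x))  # pass latent mean upward
                recon, mu, logvar = module(x)
                loss = elbo_loss(x, recon, mu, logvar)
                opt.zero_grad()
                loss.backward()
                opt.step()
                step += 1
                if step >= steps_per_module:
                    break
        for p in module.parameters():  # freeze before the next level
            p.requires_grad_(False)

# Usage sketch: latent dims chain across levels, e.g.
# modules = [VAEModule(784, 128), VAEModule(128, 32)]
# train_greedily(modules, data_loader)
```

Because each level is optimized in isolation, adding another module costs roughly constant memory rather than growing the end-to-end computation graph, which is what allows performance to improve by simply stacking more modules.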

*This work was supported in part by ONR grant N00014-20-1-2675. SN was supported by an NSF graduate research fellowship.

Project Video

GIF Visualizations