FitVid:
Overfitting in Pixel-Level Video Prediction
Mohammad Babaeizadeh¹, Mohammad Taghi Saffar¹, Suraj Nair²
Sergey Levine¹, Chelsea Finn¹, Dumitru Erhan¹
¹ Google Brain ² Stanford University
Abstract
An agent that is capable of predicting what happens next can perform a variety of tasks through planning with no additional training. Furthermore, such an agent can internally represent the complex dynamics of the real world and therefore acquire a representation useful for a variety of visual perception tasks. This makes predicting the future frames of a video, conditioned on the observed past and potentially on future actions, an interesting task that remains exceptionally challenging despite many recent advances. Existing video prediction models have shown promising results on simple, narrow benchmarks, but they generate low-quality predictions on real-life datasets with more complicated dynamics or broader domains.
There is a growing body of evidence that underfitting on the training data is one of the primary causes of these low-quality predictions. In this paper, we argue that the inefficient use of parameters in current video models is the main reason for underfitting. Therefore, we introduce a new architecture, named FitVid, which is capable of severe overfitting on the common benchmarks while having a parameter count similar to the current state-of-the-art models. We analyze the consequences of overfitting, illustrating how it can produce unexpected outcomes, such as generating high-quality output by repeating the training data, and how it can be mitigated using existing image augmentation techniques. As a result, FitVid outperforms the current state-of-the-art models across four different video prediction benchmarks on four different metrics.
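The abstract mentions that overfitting can be mitigated with existing image augmentation techniques. As a rough illustration only, the sketch below applies a random horizontal flip and a random crop consistently across every frame of a clip; the crop size, flip probability, and choice of augmentations are placeholder assumptions, not the settings used by FitVid.

import numpy as np

def augment_clip(video, rng, crop=56):
    """Apply one randomly sampled flip and crop to every frame of a clip.

    video: float array of shape (T, H, W, C). The augmentation parameters are
    sampled once per clip (not per frame) so the perturbation stays temporally
    consistent. Illustrative sketch; not FitVid's exact augmentation pipeline.
    """
    t, h, w, c = video.shape
    # Random horizontal flip, shared across all frames.
    if rng.random() < 0.5:
        video = video[:, :, ::-1, :]
    # Random spatial crop, shared across all frames.
    top = rng.integers(0, h - crop + 1)
    left = rng.integers(0, w - crop + 1)
    return video[:, top:top + crop, left:left + crop, :]

rng = np.random.default_rng(0)
clip = rng.random((10, 64, 64, 3))      # 10 frames of 64x64 RGB
augmented = augment_clip(clip, rng)     # shape (10, 56, 56, 3)

Sampling the flip and crop once per clip, rather than per frame, is what keeps an image-level augmentation valid for video.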
Method
FitVid is a new architecture for conditional variational video prediction. It has ~300 million parameters and can be trained with minimal training tricks.
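The sketch below computes the kind of per-clip objective optimized in conditional variational video prediction: a frame reconstruction term plus a weighted KL term between an approximate posterior and a prior over per-frame latent variables. The array shapes, the mean-squared-error likelihood, and the weight beta are illustrative assumptions; FitVid's exact networks, prior, and loss are described in the paper and are not reproduced here.

import numpy as np

def neg_elbo(pred, target, mu_q, logvar_q, mu_p, logvar_p, beta=1.0):
    """Negative evidence lower bound for one clip: reconstruction + beta * KL.

    pred, target: (T, H, W, C) predicted and ground-truth frames.
    mu_q, logvar_q: (T, D) diagonal-Gaussian posterior parameters q(z_t | x_{1:t}).
    mu_p, logvar_p: (T, D) diagonal-Gaussian prior parameters p(z_t | x_{1:t-1}).
    Generic sketch of the conditional variational objective, not FitVid's code.
    """
    # Reconstruction term: a Gaussian likelihood reduces to MSE up to constants.
    recon = np.mean((pred - target) ** 2)
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians, per frame.
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    kl = 0.5 * np.sum(
        logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0,
        axis=-1,
    ).mean()
    return recon + beta * kl

In models of this family, the posterior typically conditions on the frame being predicted while the prior conditions only on the past (and, in the action-conditioned case, on the actions), which pushes the latent variables to capture the stochastic part of the dynamics.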
Conditional Variational Video Prediction
FitVid Architecture
Video Samples
A red border indicates a context frame and a green border indicates a predicted frame.
In each pair, the video on the left is the ground truth and the video on the right is the prediction.
Long (300 Frames) Predictions
RoboNet (Action-conditioned)
Human3.6M
BAIR Robot Pushing (Action-free)
Matched Training Videos
The left video is the test ground truth.
The middle video is the predicted video.
The right video is the training video closest to the prediction; a minimal matching sketch follows below.
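The matching above pairs each predicted video with its closest clip in the training set. The sketch below is a minimal nearest-neighbor search, assuming mean per-pixel squared error as the distance; the actual similarity measure used to produce these matches is not specified on this page.

import numpy as np

def closest_training_video(prediction, train_videos):
    """Return the index of the training clip closest to a predicted clip.

    prediction: (T, H, W, C); train_videos: (N, T, H, W, C).
    Uses mean per-pixel squared error as an illustrative distance; treat it as
    a placeholder for whatever similarity measure was actually used.
    """
    diffs = train_videos - prediction[None]          # broadcast over the N clips
    dists = np.mean(diffs ** 2, axis=(1, 2, 3, 4))   # one distance per clip
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
train = rng.random((100, 10, 32, 32, 3))   # 100 training clips
pred = rng.random((10, 32, 32, 3))         # one predicted clip
idx = closest_training_video(pred, train)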