
Don't Blame the ELBO!

A Linear VAE Perspective on Posterior Collapse

Abstract: Posterior collapse in Variational Autoencoders (VAEs) arises when the variational posterior distribution closely matches the prior for a subset of latent variables. This paper presents a simple and intuitive explanation for posterior collapse through the analysis of linear VAEs and their direct correspondence with Probabilistic PCA (pPCA). We explain how posterior collapse may occur in pPCA due to local maxima in the log marginal likelihood. Unexpectedly, we prove that the ELBO objective for the linear VAE does not introduce additional spurious local maxima relative to the log marginal likelihood. We show further that training a linear VAE with exact variational inference recovers an identifiable global maximum corresponding to the principal component directions. Empirically, we find that our linear analysis is predictive even for high-capacity, non-linear VAEs and helps explain the relationship between the observation noise, local maxima, and posterior collapse in deep Gaussian VAEs.
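For context, the pPCA model referenced in the abstract is (in the standard formulation of Tipping & Bishop, which the paper's linear VAE corresponds to):

```latex
% Probabilistic PCA generative model
p(\mathbf{z}) = \mathcal{N}(\mathbf{z};\, \mathbf{0},\, \mathbf{I}), \qquad
p(\mathbf{x} \mid \mathbf{z}) = \mathcal{N}(\mathbf{x};\, \mathbf{W}\mathbf{z} + \boldsymbol{\mu},\, \sigma^2 \mathbf{I}),
```

so that marginalizing out \(\mathbf{z}\) gives \(p(\mathbf{x}) = \mathcal{N}(\boldsymbol{\mu},\, \mathbf{W}\mathbf{W}^\top + \sigma^2 \mathbf{I})\). A latent dimension collapses when the corresponding column of \(\mathbf{W}\) is driven to zero, in which case the posterior over that dimension equals the prior.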

James Lucas, George Tucker, Roger Grosse, Mohammad Norouzi

Slides accompanying the video presentation.

Don't blame the ELBO! Colab notebook

This Colab notebook provides simple code to train and evaluate a linear VAE in TensorFlow, supporting both the analytic and stochastic forms of the ELBO computation.
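To illustrate what the analytic ELBO of a linear Gaussian VAE looks like, here is a minimal NumPy sketch (the names, shapes, and function are our own illustration, not the notebook's code). It assumes a decoder p(x|z) = N(Wz + mu, sigma2*I), a standard normal prior, and a diagonal-covariance encoder q(z|x) = N(V(x - mu), diag(exp(log_d))):

```python
import numpy as np

def linear_vae_elbo(x, W, V, mu, log_d, sigma2):
    """Analytic per-example ELBO for a linear Gaussian VAE (illustrative sketch).

    Decoder: p(x|z) = N(W z + mu, sigma2 * I); prior: z ~ N(0, I).
    Encoder: q(z|x) = N(V (x - mu), diag(exp(log_d))).
    """
    d_obs = x.shape[0]
    k = V.shape[0]
    m = V @ (x - mu)          # variational posterior mean
    D = np.exp(log_d)         # diagonal of the posterior covariance

    # Expected reconstruction log-likelihood E_q[log p(x|z)],
    # available in closed form for the linear decoder.
    recon = -0.5 * d_obs * np.log(2 * np.pi * sigma2)
    recon -= 0.5 / sigma2 * (np.sum((x - mu - W @ m) ** 2)
                             + np.sum((W ** 2) @ D))  # tr(W diag(D) W^T)

    # KL(q(z|x) || N(0, I)) for a diagonal Gaussian, in closed form.
    kl = 0.5 * (np.sum(D) + m @ m - k - np.sum(log_d))
    return recon - kl
```

Because the exact posterior of this model is Gaussian, the bound is tight whenever the encoder matches it; for generic encoder parameters the ELBO lower-bounds the log marginal likelihood of the equivalent pPCA model, log N(x; mu, W Wᵀ + sigma2*I).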

NeurIPS 2019 Poster

Don't Blame the ELBO! A Linear VAE Perspective on Posterior Collapse; James Lucas, George Tucker, Roger Grosse, Mohammad Norouzi; NeurIPS 2019