Theoretical Foundations and Applications of Deep Generative Models


Overview

In recent years there has been a resurgence of interest in deep generative models (DGMs). Emerging approaches, such as variational autoencoders (VAEs), generative adversarial networks (GANs), generative moment matching networks (GMMNs), autoregressive neural networks, and many of their variants and extensions, have led to impressive results in a myriad of applications, such as image generation and manipulation, text generation, disentangled representation learning, and semi-supervised learning. In fact, research on DGMs has a long history. Early forms of such models date back to work on hierarchical Bayesian models and neural network models such as Helmholtz machines, originally studied in the context of unsupervised learning and latent space modeling. Despite recent advances, many foundational aspects of deep generative models remain relatively unexplored, including their theoretical properties, effective algorithms for learning and inference, and deployment in real-world applications. This workshop aims to be a platform for exchanging ideas on both the theoretical foundations and the applications of DGMs, identifying key challenges in the field, and establishing the most exciting future directions for research into DGMs.


News

Invited Speakers

Important Dates

  • Paper submissions due: 23:59 UTC, May 31, 2018
  • Acceptance notification: June 14, 2018
  • Camera-ready paper submission due: July 10, 2018
  • Workshop: July 14 & morning of July 15, 2018

[Early review and decisions may be provided for early submissions upon request]