Dougal Sutherland

Evaluating and Training Implicit Generative Models with Two-Sample Tests

Abstract

Samples from implicit generative models are difficult to judge quantitatively: particularly for images, it is typically easy for humans to spot certain kinds of samples which are very unlikely under the reference distribution, but very difficult to notice when modes are missing, or when types of samples are merely under- or over-represented. This talk will survey different approaches to evaluating the output of an implicit generative model, with a focus on identifying ways in which the model has failed. Some of these approaches also form the basis for the objective functions of GAN variants, which can help avoid some of the stability and mode-dropping issues of the original GAN.
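As a concrete illustration of the two-sample-test viewpoint in the title, the sketch below implements one standard such test: the unbiased kernel MMD statistic with a fixed-bandwidth Gaussian kernel and a permutation test. The kernel choice, bandwidth handling, and test procedure here are generic illustrative assumptions, not necessarily the specific methods covered in the talk.

# Minimal sketch: kernel MMD^2 two-sample test between model samples X
# and reference samples Y (both arrays of shape (n_samples, n_features)).
import numpy as np

def gaussian_kernel(A, B, bandwidth):
    # k(a, b) = exp(-||a - b||^2 / (2 * bandwidth^2))
    sq_dists = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq_dists / (2 * bandwidth**2))

def mmd2_unbiased(X, Y, bandwidth):
    # Unbiased estimate of MMD^2: diagonal terms are excluded from the
    # within-sample kernel averages.
    m, n = len(X), len(Y)
    K_xx = gaussian_kernel(X, X, bandwidth)
    K_yy = gaussian_kernel(Y, Y, bandwidth)
    K_xy = gaussian_kernel(X, Y, bandwidth)
    return ((K_xx.sum() - np.trace(K_xx)) / (m * (m - 1))
            + (K_yy.sum() - np.trace(K_yy)) / (n * (n - 1))
            - 2 * K_xy.mean())

def permutation_test(X, Y, bandwidth, n_perms=200, seed=None):
    # p-value for H0: X and Y are drawn from the same distribution,
    # estimated by recomputing MMD^2 on shuffled pooled samples.
    rng = np.random.default_rng(seed)
    observed = mmd2_unbiased(X, Y, bandwidth)
    pooled = np.concatenate([X, Y])
    count = 0
    for _ in range(n_perms):
        perm = rng.permutation(len(pooled))
        Xp, Yp = pooled[perm[:len(X)]], pooled[perm[len(X):]]
        if mmd2_unbiased(Xp, Yp, bandwidth) >= observed:
            count += 1
    return (count + 1) / (n_perms + 1)

A small p-value indicates the test can distinguish the model's samples from the reference data; the bandwidth is left as a parameter since its selection (e.g. by a median heuristic or by optimizing test power) is itself one of the topics this line of work addresses.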

Biography

Dougal Sutherland is a postdoc with Arthur Gretton at the Gatsby Computational Neuroscience Unit, University College London. He completed his PhD at Carnegie Mellon University in 2016, working with Jeff Schneider. His research focuses broadly on the problem of identifying, testing, and learning functions of distributions.