Qiang Liu

Wild Variational Inference with Expressive Variational Families

Abstract

Variational inference (VI) provides a powerful tool for reasoning with highly complex probabilistic models in machine learning. The basic idea of VI is to approximate complex target distributions with simpler distributions, found by minimizing the KL divergence within some predefined parametric family. A key limitation of typical VI techniques, however, is that they require the variational family to be simple enough to have tractable likelihood functions, which excludes a broad range of flexible, expressive families such as those defined via implicit models. In this talk, we will discuss a general framework for (wild) variational inference that works for much more expressive, implicitly defined variational families with intractable likelihood functions. Our key idea is to first lift the optimization problem into an infinite-dimensional space, solve it using nonparametric particle methods, and then project the update back to the finite-dimensional parameter space that we want to optimize over. Our framework is highly general and allows us to leverage any existing particle method as the inference engine for wild variational inference, including MCMC and Stein variational gradient descent.
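
To make the lift-then-project idea concrete, below is a minimal sketch in the spirit of this framework, assuming a toy Gaussian target, Stein variational gradient descent (SVGD) as the particle method, and a linear implicit sampler x = zW + b. The function names, the target, and the sampler form are illustrative choices, not details from the talk; a practical implicit sampler would be a neural network updated by gradient steps on the fitting objective rather than by a closed-form least-squares refit.

```python
# Sketch of "lift then project" wild variational inference.
# Assumptions (not from the talk): target p(x) is a toy Gaussian with mean mu;
# the particle method is SVGD with an RBF kernel; the implicit sampler is
# linear, f_theta(z) = z @ W + b, so the projection step is a least-squares fit.
import numpy as np

rng = np.random.default_rng(0)
d, n, eps = 2, 200, 0.5
mu = np.array([1.0, -2.0])                     # mean of the toy Gaussian target

def grad_log_p(x):
    """Score function of the target p(x) proportional to exp(-||x - mu||^2 / 2)."""
    return mu - x

def svgd_direction(x):
    """Lift step: nonparametric SVGD update direction
    phi(x_i) = (1/n) sum_j [ k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]."""
    diffs = x[:, None, :] - x[None, :, :]      # diffs[i, j] = x_i - x_j
    sq = (diffs ** 2).sum(axis=-1)             # pairwise squared distances
    h = np.median(sq) / np.log(len(x) + 1) + 1e-8   # median-heuristic bandwidth
    k = np.exp(-sq / h)                        # RBF kernel matrix (symmetric)
    drive = k @ grad_log_p(x)                  # kernel-smoothed gradient term
    repulse = (2.0 / h) * (k.sum(axis=1, keepdims=True) * x - k @ x)  # repulsion
    return (drive + repulse) / len(x)

# Parameters of the implicit sampler f_theta(z) = z @ W + b.
W = rng.normal(size=(d, d))
b = np.zeros(d)

for it in range(500):
    z = rng.normal(size=(n, d))                # draw noise
    x = z @ W + b                              # particles sampled from q_theta
    x_new = x + eps * svgd_direction(x)        # lift: move particles toward p
    Z = np.hstack([z, np.ones((n, 1))])        # project: refit theta = (W, b)
    theta, *_ = np.linalg.lstsq(Z, x_new, rcond=None)
    W, b = theta[:d], theta[d]

print("sampler mean ~", b, "; target mean:", mu)
```

Each iteration lifts (SVGD moves the particles toward the target in the nonparametric space) and then projects (the sampler parameters are refit so that f_theta(z) reproduces the moved particles). In this framework, any other particle method, such as a short MCMC run, could replace the SVGD step.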

Biography

Qiang Liu is an assistant professor of computer science at Dartmouth College. His research interests are in statistical machine learning, Bayesian inference, probabilistic graphical models, and crowdsourcing. He received his Ph.D. from the University of California, Irvine, followed by a brief postdoc at MIT CSAIL. He is an action editor of the Journal of Machine Learning Research (JMLR).