Yingzhen Li


Meta learning for stochastic gradient MCMC

Abstract

Stochastic gradient MCMC (SG-MCMC) has become increasingly popular for simulating posterior samples in large-scale Bayesian modeling. However, existing SG-MCMC schemes are not tailored to any specific probabilistic model, and even a simple modification of the underlying dynamical system requires significant physical intuition. In this talk I will present our first attempt towards meta-learning an SG-MCMC sampler. The learned sampler generalizes Hamiltonian Monte Carlo (HMC) with state-dependent drift and diffusion, enabling fast traversal and efficient exploration of energy landscapes.
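As background for the class of samplers the abstract refers to, the following is a minimal sketch of stochastic gradient Langevin dynamics (SGLD), one of the simplest SG-MCMC schemes: parameters follow the energy gradient plus injected Gaussian noise. This is an illustrative toy on a 1-D Gaussian target with its exact gradient, not the meta-learned sampler from the talk; in practice the gradient would be a minibatch estimate.

```python
import numpy as np

# Toy target: N(0, 1), with energy U(theta) = theta**2 / 2,
# so grad U(theta) = theta. (Exact gradient used here for simplicity;
# SG-MCMC would use a noisy minibatch estimate instead.)
def grad_U(theta):
    return theta

rng = np.random.default_rng(0)
theta = 5.0    # start far from the mode
step = 0.05    # step size (epsilon)
samples = []
for t in range(20000):
    # SGLD update: half-step down the gradient plus N(0, step) noise
    noise = rng.normal(0.0, np.sqrt(step))
    theta = theta - 0.5 * step * grad_U(theta) + noise
    samples.append(theta)

# Discard burn-in; the remaining samples should resemble N(0, 1)
burned = np.array(samples[5000:])
print(burned.mean(), burned.std())
```

More sophisticated SG-MCMC samplers (and the meta-learned one described in the talk) enrich this update with auxiliary momentum variables and state-dependent drift and diffusion terms.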

This talk is based on joint work with Wenbo Gong and José Miguel Hernández-Lobato.

Bio

Yingzhen Li recently finished her PhD at the University of Cambridge; her thesis focuses on approximate inference. She is also interested in Bayesian deep learning more broadly, including (deep) probabilistic model design and uncertainty quantification in downstream tasks.