Abstract: Variational inference (VI) is a popular alternative to Markov chain Monte Carlo (MCMC) for approximating high-dimensional target distributions. At its core, VI approximates a target distribution, typically specified via an unnormalized density, by the closest member of a simpler variational family. Despite its empirical successes, the theoretical properties of variational inference have only recently begun to be understood. In this talk, I will discuss recent developments in the theory of variational inference from an optimal transport perspective. In the first part, I will present our recent results on the stability and instability of mean-field variational inference (MFVI). Our main insight is simple: when the target distribution is strongly log-concave, MFVI is quantitatively stable under perturbations of the target, whereas even for simple non-log-concave targets, such as a mixture of two Gaussians, MFVI provably suffers from mode collapse. I will discuss consequences of these results, including guarantees for robust Bayesian inference and a quantitative Bernstein–von Mises theorem. In the second part of the talk, I will present our work on the statistical and computational theory of a class of structured variational inference in which the variational family consists of all star-shaped distributions. We establish quantitative approximation guarantees and provide a polynomial-time algorithm for solving the VI problem when the target distribution is strongly log-concave. We also discuss concrete examples, including generalized linear models with Gaussian likelihoods. This talk is based on joint work with Shunan Sheng, Alberto González-Sanz, Marcel Nutz, Sinho Chewi, Binghe Zhu, and Aram Pooladian.