Upcoming Seminar Presentations
All seminars are on Tuesdays [ 8:30 am PT ] = [ 11:30 am ET ] = [ 3:30 pm London ] = [ 4:30 pm Paris ] = [ 11:30 pm Beijing ]
Subscribe to our mailing list and calendar for an up-to-date schedule!
Tuesday, March 3, 2026
Speaker: Pierre Del Moral (INRIA Bordeaux) [Zoom Link]
Title: On the Kantorovich contraction of Markov semigroups
Abstract: We present a novel operator-theoretic framework to study the contraction properties of Markov semigroups with respect to a general class of Kantorovich semi-distances, which notably includes Wasserstein distances. This rather simple contraction cost framework combines standard Lyapunov techniques with local contraction conditions. Our results can be applied to both discrete-time and continuous-time Markov semigroups, and we illustrate their wide applicability in the context of (i) Markov transitions on models with boundary states, including bounded domains with entrance boundaries, (ii) operator products of a Markov kernel and its adjoint, including two-block-type Gibbs samplers, (iii) iterated random functions, and (iv) diffusion models, including overdamped Langevin diffusions with convex-at-infinity potentials. Joint work with M. Gerber (University of Bristol).
Links: YouTube, Slides
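For orientation, the display below sketches the flavor of contraction statement the abstract describes, in the spirit of classical Lyapunov/local-contraction arguments; the notation (kernel P, Lyapunov function V, semi-distance d) and the exact form of the conditions are our illustrative choices, not necessarily those of the talk.

```latex
% Illustrative only; notation and constants are not from the talk.
\[
  PV \le \lambda V + b, \quad 0 < \lambda < 1,\ b < \infty
  \qquad \text{(Lyapunov drift)}
\]
\[
  \mathcal{W}_d(\delta_x P, \delta_y P) \le \alpha\, d(x,y)
  \quad \text{whenever } V(x) + V(y) \le R,\ \alpha < 1
  \qquad \text{(local contraction)}
\]
% Together these typically yield, for suitable \varepsilon > 0 and some
% \rho < 1, geometric contraction in a weighted Kantorovich semi-distance:
\[
  \mathcal{W}_{\tilde d}(\mu P^n, \nu P^n)
  \le C\, \rho^n\, \mathcal{W}_{\tilde d}(\mu, \nu),
  \qquad
  \tilde d(x,y) = d(x,y)\bigl(1 + \varepsilon V(x) + \varepsilon V(y)\bigr).
\]
```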
Tuesday, March 10, 2026
Speaker: Luhuan Wu (Flatiron Institute, Johns Hopkins University) [Zoom Link]
Title: Reverse Diffusion Sequential Monte Carlo Samplers
Time: [ 8:30 am PT ] = [ 11:30 am ET ] = [ 3:30 pm London ] = [ 4:30 pm Paris ] = [ 11:30 pm Beijing ]
Abstract: Diffusion models have emerged as a powerful paradigm for generative modeling. In this talk, we explore their use as annealing paths for sampling from unnormalized target distributions. Building on prior work, we first present a unifying framework that leverages Monte Carlo methods to estimate score functions and simulate diffusion-based sampling trajectories. However, such approaches can suffer from accumulated bias due to time discretization and imperfect score estimation.
To address these challenges, we introduce a principled Sequential Monte Carlo (SMC) framework that formalizes diffusion-based samplers as proposal mechanisms while systematically correcting their biases. The key idea is to construct informative intermediate target distributions that progressively guide particles toward the final distribution of interest. Although the ideal targets are intractable, we derive exact approximations using quantities already available from the score-based proposal, requiring no extra inference overhead. The resulting method, Reverse Diffusion Sequential Monte Carlo, enables consistent sampling and unbiased estimation of the target normalization constant. We demonstrate our method on a range of synthetic targets and Bayesian regression tasks.
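The following is a minimal annealed-SMC skeleton in Python illustrating the two ingredients the abstract emphasizes: intermediate targets that progressively guide particles toward the final distribution, and a normalizing-constant estimate accumulated from the weights. It is a generic sketch on a toy 1D bimodal target, not the Reverse Diffusion SMC algorithm of the talk; in particular, it uses random-walk Metropolis moves along a linear annealing path rather than score-based reverse-diffusion proposals.

```python
# Generic annealed SMC sketch (illustrative; NOT the RDSMC method of the talk).
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Unnormalized final target: 1D mixture of N(+2, 1) and N(-2, 1).
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

def log_init(x):
    # Unnormalized tractable reference: standard normal.
    return -0.5 * x ** 2

def smc(n_particles=2000, n_steps=50, step_size=0.5):
    betas = np.linspace(0.0, 1.0, n_steps + 1)    # annealing schedule
    x = rng.standard_normal(n_particles)          # exact draws from the reference
    log_Z = 0.0                                   # running log normalizer estimate
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # Incremental importance weight between consecutive intermediate targets.
        log_w = (b - b_prev) * (log_target(x) - log_init(x))
        log_Z += np.logaddexp.reduce(log_w) - np.log(n_particles)
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        x = x[rng.choice(n_particles, size=n_particles, p=w)]  # resample
        # One random-walk Metropolis move targeting the current target pi_b.
        def log_pi(y):
            return (1.0 - b) * log_init(y) + b * log_target(y)
        prop = x + step_size * rng.standard_normal(n_particles)
        accept = np.log(rng.random(n_particles)) < log_pi(prop) - log_pi(x)
        x = np.where(accept, prop, x)
    return x, log_Z

samples, log_Z = smc()
# The product of average weights is unbiased for Z_target / Z_reference = 2 here,
# so log_Z should be close to log 2 ~= 0.693 (the log estimate itself is
# consistent but slightly biased downward, by Jensen's inequality).
print("log normalizer-ratio estimate:", log_Z)
```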
Tuesday, March 17, 2026
Speaker: Anna Korba (ENSAE/CREST) [Zoom Link]
Title: Variational Inference with Mixtures of Isotropic Gaussians
Time: [ 9:30 am PT ] = [ 12:30 pm ET ] = [ 4:30 pm London ] = [ 5:30 pm Paris ] = [ 0:30 am Beijing + 1d]
Abstract: Variational inference (VI) is a popular approach to Bayesian inference that looks for the best approximation of the posterior distribution within a parametric family, minimizing a loss that is typically the (reverse) Kullback-Leibler (KL) divergence. In this work, we focus on the following parametric family: mixtures of isotropic Gaussians (i.e., with diagonal covariance matrices proportional to the identity) and uniform weights. We develop a variational framework and provide efficient algorithms suited for this family. In contrast with mixtures of Gaussians with generic covariance matrices, this choice strikes a balance between accurately approximating multimodal Bayesian posteriors and remaining memory- and computation-efficient. Our algorithms implement gradient descent on the locations of the mixture components (the modes of the Gaussians), and either (entropic) mirror descent or Bures descent on their variance parameters. We illustrate the performance of our algorithms on numerical experiments. This is joint work with Marguerite Petit-Talamon and Marc Lambert, presented at NeurIPS 2025.
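As a rough illustration of the update structure described in the abstract, here is a naive NumPy sketch for a uniform-weight mixture of isotropic Gaussians: component locations follow the averaged score difference between target and approximation (a gradient-flow-flavored location step), and each scalar variance takes a multiplicative step in log-space as a crude stand-in for an entropic mirror-descent update. The toy target, step sizes, and the precise variance update are our simplifications, not the algorithms of the paper.

```python
# Naive sketch of VI with a uniform-weight mixture of isotropic Gaussians
# (our simplification; NOT the exact algorithm of the paper).
import numpy as np

rng = np.random.default_rng(1)
d, K, n_mc, lr, n_iter = 2, 10, 64, 0.05, 500

# Toy target: mixture of two unit-covariance Gaussians at (+3, 0) and (-3, 0).
centers = np.array([[3.0, 0.0], [-3.0, 0.0]])

def grad_log_p(x):                                  # x: (n, d) -> score (n, d)
    diff = x[:, None, :] - centers[None]            # (n, 2, d)
    logw = -0.5 * (diff ** 2).sum(-1)               # (n, 2)
    r = np.exp(logw - logw.max(1, keepdims=True))
    r /= r.sum(1, keepdims=True)                    # responsibilities
    return -(r[..., None] * diff).sum(1)

mu = rng.standard_normal((K, d))                    # component locations
logv = np.zeros(K)                                  # log isotropic variances

def grad_log_q(x):                                  # score of the mixture q
    v = np.exp(logv)
    diff = x[:, None, :] - mu[None]                 # (n, K, d)
    logw = -0.5 * (diff ** 2).sum(-1) / v - 0.5 * d * logv
    r = np.exp(logw - logw.max(1, keepdims=True))
    r /= r.sum(1, keepdims=True)
    return -(r[..., None] * diff / v[None, :, None]).sum(1)

for _ in range(n_iter):
    for k in range(K):
        s = np.exp(0.5 * logv[k])
        eps = rng.standard_normal((n_mc, d))
        x = mu[k] + s * eps                         # samples from component k
        g = grad_log_p(x) - grad_log_q(x)           # score difference
        mu[k] += lr * g.mean(0)                     # location step
        # d/ds E[f(mu + s*eps)] = E[eps . grad f]; step on v = s^2 in
        # log-coordinates (a crude entropic-mirror stand-in): dv = 2s ds.
        gs = (eps * g).sum(1).mean()
        logv[k] += lr * gs / (2.0 * s)

print("learned locations (should cluster near +/-(3, 0)):")
print(np.round(mu, 2))
```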
Tuesday, March 24, 2026
Speaker: Giacomo Zanella (Bocconi University) [Zoom Link]
Title: Error Bounds and Optimal Schedules for Masked Diffusion Models
Time: [ 8:30 am PT ] = [ 11:30 am ET ] = [ 3:30 pm London ] = [ 4:30 pm Paris ] = [ 11:30 pm Beijing ]
Abstract: Masked Diffusion Models are popular generative models for discrete data, which exploit conditional independence approximations to reduce the computational cost of Auto-Regressive Models. We study the resulting computation-vs-accuracy trade-off, providing general error bounds (in relative entropy) that depend only on the average number of tokens generated per iteration and are independent of the data dimensionality (i.e., sequence length). We then investigate the gains obtained by using non-constant schedule sizes and identify the optimal schedule as a function of the so-called information profile of the data distribution. The talk is based on joint work with Hugo Lavenant, available at https://arxiv.org/abs/2510.25544.
Links: Paper
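To make the conditional-independence error concrete, here is a tiny worked example (our construction, not from the paper): unmasking two correlated binary tokens in a single step effectively samples them from the product of their marginals, and the resulting relative entropy to the true joint, here equal to the mutual information between the tokens, is exactly the kind of per-step error that such bounds control. Unmasking one token per step (the autoregressive limit) incurs no such error but costs more iterations.

```python
# Toy illustration of the parallel-unmasking error (our construction).
import numpy as np

def kl(p, q):
    # Relative entropy KL(p || q) between two discrete distributions.
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float((p[mask] * np.log(p[mask] / q[mask])).sum())

for rho in [0.0, 0.3, 0.45]:
    # Joint law of two binary tokens with correlation parameter rho;
    # both marginals are uniform (0.5, 0.5) for every rho.
    joint = np.array([[0.25 + rho / 2, 0.25 - rho / 2],
                      [0.25 - rho / 2, 0.25 + rho / 2]])
    mx, my = joint.sum(1), joint.sum(0)       # marginals
    product = np.outer(mx, my)                # one-shot (parallel) unmasking
    # Sequential unmasking is exact (KL = 0); parallel unmasking pays
    # KL(joint || product) = mutual information between the two tokens.
    print(f"rho={rho:.2f}  KL(joint || product) = "
          f"{kl(joint.ravel(), product.ravel()):.4f}")
```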