Upcoming Seminar Presentations
All seminars are on Tuesdays [ 8:30 am PT ] = [ 11:30 am ET ] = [ 4:30 pm London ] = [ 5:30 pm Paris ] = [ 11:30 pm Beijing ]
Subscribe to our mailing list and calendar for an up-to-date schedule!
Tuesday, October 7, 2025
Speaker: Michael Albergo (Harvard) [Zoom Link]
Title: Non-equilibrium transport and tilt matching for sampling
Abstract: We propose a simple, scalable algorithm for using stochastic interpolants to sample from unnormalized densities and to fine-tune generative models. The approach, Tilt Matching, arises from a dynamical equation relating the velocity field of a flow matching method to the velocity field that would target the same distribution tilted by a reward. As such, the new velocity inherits the regularity of stochastic interpolant transport plans while also being the minimizer of an objective function with strictly lower variance than flow matching itself. The update to the velocity field that emerges from this simple regression problem can be interpreted as the sum of all joint cumulants of the stochastic interpolant and copies of the reward, and to first order is their covariance. We define two versions of the method, Explicit and Implicit Tilt Matching. The algorithms do not require access to gradients of the reward or backpropagation through trajectories of the flow or diffusion. We empirically verify that the approach is efficient, unbiased, and highly scalable: it provides state-of-the-art results on sampling under Lennard-Jones potentials and is competitive at fine-tuning Stable Diffusion, without requiring reward multipliers. It can also be applied straightforwardly to tilting few-step flow map models.
Links: YouTube
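The abstract above describes the first-order tilt update as the covariance between the stochastic interpolant's velocity target and the reward. As a rough, non-authoritative illustration of that single idea (not of the speaker's Tilt Matching algorithm), the Python sketch below estimates such a covariance by Monte Carlo for a linear interpolant between two Gaussian sample sets; the interpolant form, the reward, and the distributions are all assumptions made purely for illustration.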
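# Illustrative sketch only: Monte Carlo estimate of a "covariance with the
# reward" correction to a flow-matching velocity target. All choices below
# (linear interpolant, Gaussian samples, toy reward) are assumptions for
# illustration and do not reproduce the Tilt Matching algorithm.
import numpy as np

rng = np.random.default_rng(0)

def reward(x1):
    # Hypothetical reward chosen for illustration: favor a large first coordinate.
    return x1[:, 0]

n, d = 10_000, 2
x0 = rng.normal(size=(n, d))             # base samples
x1 = rng.normal(loc=1.0, size=(n, d))    # samples from the model to be tilted
t = rng.uniform(size=(n, 1))             # interpolation times

xt = (1.0 - t) * x0 + t * x1             # linear interpolant (unused here; where a
                                         # learned velocity field would be evaluated)
v_target = x1 - x0                       # flow-matching velocity regression target

r = reward(x1)
r_centered = r - r.mean()

# First-order correction in the covariance interpretation, averaged over all
# pairs; a practical method would regress this conditionally on (xt, t) with a
# neural network rather than use one global vector.
delta_v = (v_target * r_centered[:, None]).mean(axis=0)

print("mean velocity target:", v_target.mean(axis=0))
print("covariance-with-reward correction:", delta_v)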