Sampling from posterior distributions is at the heart of Bayesian computation. I am particularly interested in methods that combine the flexibility of particle-based techniques with the structure and accuracy of deterministic high-order methods. My work spans several complementary directions:
Particle Filters and Particle Flow Methods: I study particle-based methods for sequential inference, including filtering algorithms that use continuous-time flows to transport particles in the state space. These approaches offer a promising alternative to traditional resampling-based filters, particularly in high-dimensional or nonlinear settings. [arxiv]
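To give a flavor, here is a minimal sketch (assuming a single linear-Gaussian measurement update, with illustrative parameters throughout) of an exact particle flow in the style of Daum and Huang: an ODE in a homotopy parameter moves prior particles to the posterior without any resampling.

```python
import numpy as np

# Hedged sketch of an exact particle flow (Daum-Huang style) for one
# linear-Gaussian measurement update; all parameters are illustrative.
rng = np.random.default_rng(0)

d = 2                                  # state dimension
P = np.eye(d)                          # prior covariance
xbar = np.zeros(d)                     # prior mean
H = np.array([[1.0, 0.0]])             # observation operator
R = np.array([[0.5]])                  # observation noise covariance
z = np.array([1.2])                    # observed data

X = rng.multivariate_normal(xbar, P, size=1000)   # prior particles

lam = np.linspace(0.0, 1.0, 51)        # homotopy parameter in [0, 1]
for l0, l1 in zip(lam[:-1], lam[1:]):
    S = l0 * H @ P @ H.T + R
    A = -0.5 * P @ H.T @ np.linalg.solve(S, H)
    b = (np.eye(d) + 2 * l0 * A) @ (
        (np.eye(d) + l0 * A) @ P @ H.T @ np.linalg.solve(R, z) + A @ xbar
    )
    # Euler step of the flow dx/dlam = A(lam) x + b(lam) for every particle
    X = X + (l1 - l0) * (X @ A.T + b)

# Exact Kalman posterior mean for comparison: [0.8, 0.0]
print("flow posterior mean:", X.mean(axis=0))
```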
Sampling via Transport Maps: A central theme in my work is sampling through transport maps, especially those defined via ODE flows. These maps offer an elegant way to deterministically transform reference distributions (e.g., Gaussians) into complex targets, and provide a bridge between sampling, optimization, and dynamical systems. [arxiv]
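The basic identity underlying such ODE transports is the continuity equation, which lets one track the log-density of the transported samples by integrating the divergence of the velocity field along trajectories:

```latex
\dot{x}_t = v_t(x_t), \quad x_0 \sim \rho_0
\;\Longrightarrow\;
\partial_t \rho_t + \nabla \cdot (\rho_t v_t) = 0,
\qquad
\log \rho_1(x_1) = \log \rho_0(x_0) - \int_0^1 (\nabla \cdot v_t)(x_t)\, \mathrm{d}t .
```

Choosing the velocity field so that the time-one density matches the target turns sampling into the solution of an ODE.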
Importance Sampling and Error Correction: I investigate how importance sampling can be used to correct for biases introduced by approximate transports or low-fidelity models. This includes designing adaptive and hybrid schemes that balance efficiency and robustness.
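A minimal self-contained sketch of this correction step (target, transport map and sample size are illustrative choices): samples pushed through an imperfect map are reweighted by target-over-proposal density ratios.

```python
import numpy as np
from scipy import stats

# Self-normalized importance sampling correcting an approximate transport.
rng = np.random.default_rng(1)

log_target = lambda x: stats.norm.logpdf(x, loc=1.0, scale=0.7)  # unnormalized suffices

# Approximate transport: push standard normals through an imperfect affine map,
# so the proposal is q = N(0.9, 0.8^2) instead of the target N(1.0, 0.7^2).
z = rng.standard_normal(10_000)
x = 0.9 + 0.8 * z
log_q = stats.norm.logpdf(x, loc=0.9, scale=0.8)

log_w = log_target(x) - log_q            # weights correct the transport bias
w = np.exp(log_w - log_w.max())
w /= w.sum()

print("corrected mean:", np.sum(w * x))             # approx 1.0
print("effective sample size:", 1.0 / np.sum(w ** 2))
```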
Markov Chain Importance Sampling (MCIS): Together with Ingmar Schuster, I have developed a method that recycles the rejected proposals of MCMC via importance sampling: by assigning appropriate weights, every proposal of the Markov chain contributes to the estimator, which can significantly improve statistical efficiency. [JCGS] [arxiv]
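The following sketch conveys the core idea on a one-dimensional toy target (the papers treat the general case): every proposal y drawn from q(. | x) is stored with weight pi(y) / q(y | x), and a self-normalized estimator uses all of them.

```python
import numpy as np
from scipy import stats

# Sketch of the MCIS idea: importance-weight *every* proposal of an MH chain.
rng = np.random.default_rng(2)
log_pi = lambda x: stats.norm.logpdf(x, loc=2.0, scale=1.0)  # target, up to a constant

sigma, x = 1.5, 0.0                    # proposal scale and initial state
proposals, log_w = [], []

for _ in range(20_000):
    y = x + sigma * rng.standard_normal()
    log_q = stats.norm.logpdf(y, loc=x, scale=sigma)   # q(y | x)
    proposals.append(y)
    log_w.append(log_pi(y) - log_q)                    # weight pi(y) / q(y | x)
    # Usual Metropolis accept/reject keeps the chain pi-stationary
    if np.log(rng.uniform()) < log_pi(y) - log_pi(x):
        x = y

log_w = np.asarray(log_w)
w = np.exp(log_w - log_w.max())
w /= w.sum()
print("MCIS mean estimate:", np.sum(w * np.asarray(proposals)))   # approx 2.0
```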
High-Order Methods and Cubature Rules: I am also interested in the interplay between sampling and high-order numerical integration, particularly using quasi-Monte Carlo (QMC) methods and sparse grids. A key question is how such structured point sets can be effectively transported to match complex target distributions, and how importance reweighting can be used to maintain high accuracy. [arxiv]
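As a small illustration of the transport-plus-reweighting idea (target and point set are illustrative), a scrambled Sobol' sequence is pushed through the inverse Gaussian CDF and then importance-reweighted towards a heavier-tailed target:

```python
import numpy as np
from scipy import stats
from scipy.stats import qmc

# Transport a Sobol' point set to a Gaussian reference, then reweight it
# towards a Student-t target; all distributions here are illustrative.
u = qmc.Sobol(d=1, scramble=True, seed=3).random_base2(m=10)  # 2^10 points in (0,1)
x = stats.norm.ppf(u[:, 0])                                   # transported points ~ N(0,1)

log_w = stats.t.logpdf(x, df=4) - stats.norm.logpdf(x)        # importance correction
w = np.exp(log_w - log_w.max())
w /= w.sum()

print("E[x^2] under t(4):", np.sum(w * x ** 2))               # exact value: 2
```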
Another line of my research focuses on the definition, existence, and stability of modes of probability measures, particularly in infinite-dimensional or non-smooth settings. In the Bayesian framework, the mode of the posterior is known as the maximum a posteriori (MAP) estimator. However, defining what constitutes a "mode" is subtle—especially beyond finite-dimensional, absolutely continuous cases. Together with Tim Sullivan, Hefin Lambley, Philipp Wacker, Han Cheng Lie and Birzhan Ayanbayev, I study the mathematical foundations of modes, including their
rigorous definition [arxiv],
existence criteria [Inverse Problems] [arxiv],
stability under perturbations [Inverse Problems] [arxiv] (part 1), [Inverse Problems] [arxiv] (part 2).
This work contributes to a deeper understanding of MAP estimation in complex Bayesian models and provides tools for comparing and classifying different mode concepts.
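For orientation, the small-ball definition of a (strong) mode that this line of work builds on, going back to Dashti, Law, Stuart and Voss, compares the mass of shrinking balls: writing B_r(u) for the open ball of radius r centred at u,

```latex
u^\star \text{ is a strong mode of } \mu
\quad :\Longleftrightarrow \quad
\lim_{r \searrow 0} \frac{\mu\big(B_r(u^\star)\big)}{\sup_{u} \mu\big(B_r(u)\big)} = 1 .
```

Variants such as weak and generalized modes modify how the supremum and the limit are taken, and comparing these notions is part of the classification mentioned above.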
I am also interested in the intersection of kernel methods with Bayesian computation and sampling. In particular, I explore how kernel embeddings of distributions can be integrated into transport and importance sampling frameworks.
Together with Tim Sullivan, Björn Sprungk and Ingmar Schuster, I developed a rigorous theory of conditional mean embeddings [SIMODS] [arxiv] and linear conditional expectations [Bernoulli] [arxiv]. This theory bridges the gap between reproducing kernel Hilbert space (RKHS) methods and Bayesian inference, offering a principled foundation for data-driven inference and functional regression.
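As a small numerical illustration of the empirical side of this theory (kernel, bandwidth and regularization below are illustrative choices, not those of the papers), the conditional mean embedding yields a kernel-ridge-regression formula for conditional expectations:

```python
import numpy as np

# Empirical conditional mean embedding: estimate E[f(Y) | X = x] by weighting
# the samples f(Y_i) with kernel ridge regression coefficients.
rng = np.random.default_rng(4)
n = 500
X = rng.uniform(-2, 2, size=n)
Y = np.sin(X) + 0.1 * rng.standard_normal(n)          # Y depends on X

def gauss_kernel(a, b, ell=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

lam = 1e-3                                            # regularization parameter
K = gauss_kernel(X, X)
kx = gauss_kernel(X, np.array([0.5]))                 # kernel vector at x = 0.5
alpha = np.linalg.solve(K + n * lam * np.eye(n), kx)

f = lambda y: y                                       # any function of Y works
print("estimate of E[Y | X=0.5]:", f(Y) @ alpha[:, 0])  # close to sin(0.5) ~ 0.479
```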
Together with Vesa Kaarnioja, Claudia Schillings and Yuya Suzuki, I am also working on combining kernel methods with quasi-Monte Carlo (QMC), with a focus on kernel-based cubature for high-dimensional integration. [arxiv]
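A toy version of such a kernel cubature rule (Gaussian kernel on [0, 1] and Sobol' nodes, both illustrative) computes optimal weights by solving K w = z, where z collects the kernel mean embedding of the integration measure:

```python
import numpy as np
from scipy.special import erf
from scipy.stats import qmc

# Kernel cubature on QMC nodes: weights solve K w = z with z the kernel mean.
ell = 0.2
k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# Closed-form kernel mean z(x) = int_0^1 k(x, y) dy for the Gaussian kernel
z = lambda x: ell * np.sqrt(np.pi / 2) * (
    erf((1 - x) / (np.sqrt(2) * ell)) + erf(x / (np.sqrt(2) * ell))
)

nodes = qmc.Sobol(d=1, scramble=True, seed=5).random_base2(m=4)[:, 0]  # 16 nodes
K = k(nodes, nodes) + 1e-8 * np.eye(nodes.size)   # small jitter for stability
w = np.linalg.solve(K, z(nodes))                  # cubature weights

f = lambda x: np.exp(x)                           # smooth test integrand
print("kernel cubature estimate:", w @ f(nodes))  # exact: e - 1 ~ 1.7183
```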
I am particularly interested in the construction of so-called uninformative priors. Every prior distribution encodes some information about the parameter, so a truly non-informative prior is unlikely to exist (see this StackExchange post); still, one can aim for objective priors that possess desirable properties, such as invariance under reparametrization. The theoretical study of such priors began with Jeffreys' prior (Jeffreys, 1946) and has since been extended through information-theoretic approaches, most notably the reference priors of Bernardo and Berger.
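Concretely, Jeffreys' construction takes the prior density proportional to the square root of the (determinant of the) Fisher information, which is what makes it invariant under reparametrization; for a Bernoulli likelihood it recovers the Beta(1/2, 1/2) distribution:

```latex
\pi(\theta) \propto \sqrt{\det I(\theta)},
\qquad
I(\theta) = \frac{1}{\theta(1-\theta)}
\;\text{ for }\; y \mid \theta \sim \mathrm{Bernoulli}(\theta),
\qquad
\pi(\theta) \propto \theta^{-1/2} (1-\theta)^{-1/2} .
```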
In joint work with Alexander Sikorski, Susanna Röblitz and Christof Schütte, we generalize these ideas to the empirical Bayes setting, developing analogues of objective priors for data-driven scenarios. [SJOS] [arxiv]