2014-04-10: Randal Douc and Amandine Schreck

Post date: 11-Mar-2014 13:17:18

* 15h, Randal Douc, Télécom SudParis, will talk about:

Uniform ergodicity of the Particle Gibbs sampler

(joint work with F. Lindsten, and E. Moulines)

The particle Gibbs (PG) sampler is a systematic way of using a particle filter within Markov chain Monte Carlo (MCMC). This results in an off-the-shelf Markov kernel on the space of state trajectories, which can be used to simulate from the full joint smoothing distribution for a state space model in an MCMC scheme. We show that the PG Markov kernel is uniformly ergodic under rather general assumptions, which we will carefully review and discuss. In particular, we provide an explicit rate of convergence which reveals that: (i) for a fixed number of data points, the convergence rate can be made arbitrarily good by increasing the number of particles, and (ii) under general mixing assumptions, the convergence rate can be kept constant by increasing the number of particles superlinearly with the number of observations. We illustrate the applicability of our result by studying in detail two common state space models with non-compact state spaces.

* 16h15, Amandine Schreck, Télécom Paristech, will talk about:

An adaptive version of the equi-energy sampler

Markov chain Monte Carlo (MCMC) methods make it possible to sample from a target distribution known only up to a multiplicative constant. A canonical example of such methods is the Metropolis-Hastings (MH) sampler, which draws points from a proposal distribution and subjects them to an acceptance-rejection step. It is well known, however, that the efficiency of classical MH-based samplers depends on the choice of the proposal distribution. For example, when sampling a multimodal distribution, an MH sampler without a proper proposal distribution will tend to become trapped in one of the modes.
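The acceptance-rejection step described above can be sketched as follows. This is a minimal random-walk illustration with a placeholder target and proposal, not the samplers discussed in the talks; note that only the unnormalised density is needed, since the constant cancels in the acceptance ratio:

```python
import math
import random

def metropolis_hastings(log_target, proposal_sample, n_steps, x0):
    """Random-walk Metropolis-Hastings with a symmetric proposal.

    log_target: unnormalised log-density of the target (the normalising
    constant cancels in the acceptance ratio).
    proposal_sample: draws a candidate given the current state.
    """
    x = x0
    samples = []
    for _ in range(n_steps):
        y = proposal_sample(x)
        # Accept with probability min(1, pi(y)/pi(x)), computed on the
        # log scale; otherwise stay at the current state.
        if math.log(random.random()) < log_target(y) - log_target(x):
            x = y
        samples.append(x)
    return samples

# Illustration: sample a standard normal with a Gaussian random-walk proposal.
random.seed(0)
draws = metropolis_hastings(
    log_target=lambda x: -0.5 * x * x,
    proposal_sample=lambda x: x + random.gauss(0.0, 1.0),
    n_steps=5000,
    x0=0.0,
)
mean = sum(draws) / len(draws)
```

With a badly matched proposal (e.g. a very narrow random walk on a multimodal target), the chain produced by this loop stays near one mode for long stretches, which is exactly the failure mode motivating the equi-energy construction below.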

The Equi-Energy sampler proposed by Kou, Zhou and Wong in 2006 is an interacting MCMC sampler especially designed for multimodal distributions. This algorithm is based on the idea that sampling a tempered version of a multimodal distribution allows better mixing between the modes. It therefore runs several chains at different temperatures in parallel, and sometimes allows lower-tempered chains to jump to a past point of a higher-tempered chain. As usual, this jump is associated with an acceptance-rejection step, so that the algorithm has the desired asymptotic properties. Since the acceptance probability of this jump can be very low when the temperatures of the two chains, or the energies of the current and proposed points, are too different, a selection step is added to the algorithm: given a set of energy rings, only jumps to past points of the higher-tempered process lying in the same energy ring as the current point of the process of interest are allowed.
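The ring-based selection step can be sketched as follows. This is a hypothetical illustration with fixed ring boundaries and made-up history values (the adaptive version discussed below learns the boundaries instead): a jump is only proposed among past states of the higher-tempered chain whose energy falls in the current point's ring.

```python
import bisect

def ring_index(energy, boundaries):
    """Index of the energy ring containing `energy`, for sorted cut
    points `boundaries` (k cut points define k+1 rings)."""
    return bisect.bisect_right(boundaries, energy)

# Hypothetical fixed boundaries: three cut points -> four energy rings.
boundaries = [1.0, 2.0, 4.0]

# Past (energy, state) pairs of the higher-tempered chain (made-up values).
history = [(0.3, 0.5), (1.7, 2.1), (3.9, 0.9)]

# Only past states in the same ring as the current point are jump candidates.
current_energy = 1.2
ring = ring_index(current_energy, boundaries)
candidates = [s for e, s in history if ring_index(e, boundaries) == ring]
# Here only the state with energy 1.7 shares ring 1 with the current point.
```

Restricting candidates in this way keeps the energies of the current and proposed points comparable, which is what prevents the jump acceptance probability from collapsing.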

A major drawback of this algorithm is that it depends on many design parameters and thus requires a significant tuning effort. In this work, we introduce an Adaptive Equi-Energy (AEE) sampler which automates the choice of the selection mechanism when jumping onto a state of the higher-tempered chain. We propose two different ways of defining the rings: one using empirical quantiles, and one using a stochastic approximation algorithm. We prove ergodicity and a strong law of large numbers for AEE, and for the original Equi-Energy sampler as well. Finally, we provide some illustrations for AEE.
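The quantile-based variant of the ring definition can be sketched as follows. This is a minimal sketch of the idea only, with a hypothetical helper name: ring boundaries are taken as empirical quantiles of the energies observed so far, so each ring captures roughly the same fraction of past states.

```python
def quantile_boundaries(energies, n_rings):
    """Ring cut points as empirical quantiles of the observed energies
    (hypothetical helper illustrating the quantile-based adaptation)."""
    xs = sorted(energies)
    n = len(xs)
    # k-th cut point is the (k/n_rings)-empirical quantile.
    return [xs[(k * n) // n_rings] for k in range(1, n_rings)]

# Illustration: four rings from 100 evenly spread energy values give
# cut points near the quartiles.
cuts = quantile_boundaries(list(range(100)), 4)
# cuts == [25, 50, 75]
```

Recomputing such boundaries as the chains run is what makes the ring definition adaptive; the stochastic-approximation variant mentioned in the abstract updates them incrementally instead of re-sorting the full history.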