2014-05-15: Luke Bornn and Pierre Jacob

Post date: 07-Apr-2014 14:45:56

* Luke Bornn, Harvard University, will talk at 3pm on:

Towards the Derandomization of Markov chain Monte Carlo

In this talk, I will explore the current trend towards conducting Bayesian inference through Markov chain Monte Carlo (MCMC) algorithms that converge at a rate faster than $n^{-1/2}$ by derandomizing components of the algorithm. For instance, herded Gibbs sampling (Bornn et al., 2013) can be shown to converge in certain settings at an $n^{-1}$ rate. These algorithms exhibit remarkable similarity to existing MCMC algorithms; as an example, herded Gibbs sampling is equivalent to the Wang-Landau algorithm with particular choices of tuning parameters and with the random sampling replaced by an argmax step. We demonstrate that many such MCMC algorithms occupy a middle ground between vanilla Gibbs samplers and deterministic algorithms, using clever auxiliary variable schemes both to induce negatively correlated samples and to force exploration of the parameter space. Based on this observation, we propose several new algorithms which exploit elements of both MCMC and deterministic algorithms to improve exploration and convergence.
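To illustrate the derandomization idea in its simplest form, here is a minimal sketch (not the herded Gibbs algorithm of the talk, just the basic herding update for a fixed discrete distribution): the random draw is replaced by an argmax over accumulated weights, and the empirical frequencies then track the target at an $O(1/n)$ rate rather than the Monte Carlo $O(n^{-1/2})$. The function name and weight scheme below are illustrative choices, not from the talk.

```python
import numpy as np

def herded_sample(p, n):
    """Deterministically 'sample' n states from a discrete distribution p
    via herding: pick the state with the largest accumulated weight, then
    add p to all weights and subtract 1 from the chosen state's weight."""
    p = np.asarray(p, dtype=float)
    w = p.copy()                  # herding weights, initialised at p
    samples = np.empty(n, dtype=int)
    for t in range(n):
        i = int(np.argmax(w))     # deterministic argmax replaces a random draw
        samples[t] = i
        w += p                    # accumulate the target probabilities
        w[i] -= 1.0               # penalise the state just emitted
    return samples

p = [0.5, 0.3, 0.2]
s = herded_sample(p, 1000)
freq = np.bincount(s, minlength=3) / len(s)
# each empirical frequency is within O(1/n) of the corresponding entry of p
```

The per-state counts stay within a bounded distance of $n p_i$ for all $n$, which is the sense in which the deterministic sequence beats independent sampling.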

* Pierre Jacob, Oxford University, will talk at 4.15pm on:

On exact inference and unbiased estimation

We study the existence of algorithms generating almost surely non-negative unbiased estimators of quantities of practical interest. Our work is motivated by recent "exact approximation" methods that compute integrals with respect to a target distribution up to arbitrary precision. These techniques can be implemented as long as non-negative unbiased estimators of the target density evaluations are available. In particular, we show that given a non-constant function f from ℝ to ℝ+ and real-valued unbiased estimators {Xn} of λ in ℝ, there is no algorithm yielding almost surely non-negative unbiased estimators of f(λ). Even if {Xn} is itself a.s. non-negative, there is no algorithm yielding a.s. non-negative unbiased estimators of f(λ) unless f is increasing. However, if the support of {Xn} is a known interval of ℝ, then such algorithms exist as long as f satisfies a polynomial bound away from zero. The results are discussed in the light of several typical situations corresponding to different functions f, such as inference with large datasets, inference in doubly intractable models, and inference with reference priors.
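The sign problem the abstract alludes to can be seen in a classic construction (a sketch, not an algorithm from the talk): the Poisson power-series estimator builds an unbiased estimator of f(λ) = exp(-λ) from i.i.d. unbiased draws X_i of λ, but the resulting estimates are frequently negative. The function names below are illustrative.

```python
import math
import random

def poisson_estimator(lam_sampler):
    """One draw of an unbiased estimator of exp(-lambda), built from i.i.d.
    unbiased draws X_i of lambda.  With N ~ Poisson(1),
    E[ e * prod_{i<=N} (-X_i) ] = exp(-lambda), but the product's sign
    fluctuates, so the estimator is not almost-surely non-negative."""
    # draw N ~ Poisson(1) by CDF inversion
    N, u = 0, random.random()
    p = F = math.exp(-1.0)
    while u > F:
        N += 1
        p /= N
        F += p
    est = math.e
    for _ in range(N):
        est *= -lam_sampler()     # each factor is an independent -X_i
    return est

random.seed(1)
lam = 0.5
draws = [poisson_estimator(lambda: random.gauss(lam, 1.0))
         for _ in range(200_000)]
mean = sum(draws) / len(draws)
neg_frac = sum(d < 0 for d in draws) / len(draws)
# mean is close to exp(-0.5), yet a sizeable fraction of draws is negative
```

The empirical mean recovers exp(-λ), but a large share of individual draws is negative, which is exactly what rules these estimators out as plug-in density estimates in exact-approximation schemes.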