### Program

#### 2014-06-19: Chris Holmes and Pierre Pudlo

Published 7 Apr. 2014 at 07:47 by Nicolas Chopin [updated: 16 Jun. 2014 at 01:06]

Change of plan: talks will be in Amphithéâtre Darboux, inside IHP (not another venue outside, as initially announced).

* At 3pm, Chris Holmes, Oxford University, will talk about: Robust statistical decisions via re-weighted Monte Carlo samples

  Large complex data sets typically demand approximate models at some level of specification. In such situations it is important for the analyst to examine the robustness of conclusions to approximate predictions. Recent developments in optimal control and econometrics have established formal methods for linear quadratic state space models (see e.g. Hansen and Sargent 2008) by considering the local-minimax outcome within an information divergence (Kullback-Leibler) neighbourhood around the approximating model. Here we show how these approaches can be extended to arbitrary probability models using Monte Carlo methods. We derive theoretical results establishing the uniqueness of the Kullback-Leibler criteria, as well as Bayesian non-parametric methods to sample from the space of probability distributions within a fixed divergence constraint.

* At 4.15pm, Pierre Pudlo, Université de Montpellier 2, will talk about: ABC and machine learning

  Since its introduction in the late 1990s, the ABC method has been analysed from several perspectives, from a purely practical one to a non-parametric one. We develop a new vision of how generic machine learning tools, such as the random forests of Breiman (2001), can be used to run model selection in the complex models covered by ABC techniques. Our perspective radically alters the way model selection is operated, as we do not compute posterior probabilities for the models under comparison, which cannot be reliably estimated, but propose instead to compute the performance of the selection method. As an aside, we argue that random forest methods can be adapted to the settings of interest, with a recommendation for sparse implementations of the random forest tree construction, using subsampling and reduced reference tables.
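Holmes's local-minimax idea has a simple discrete analogue that can be sketched in a few lines of numpy: the worst-case expectation of a loss over all reweightings of a Monte Carlo sample within a Kullback-Leibler ball of radius eps is attained by exponentially tilted weights. This is only our illustrative reduction to the sample case, not the method of the talk; the function and all parameter names are ours.

```python
import numpy as np

def robust_expectation(loss, eps, tau_lo=1e-3, tau_hi=1e3, iters=60):
    """Worst-case mean of `loss` over reweightings of a Monte Carlo
    sample whose KL divergence from uniform weights is at most `eps`.
    The maximiser is exponential tilting w_i proportional to
    exp(loss_i / tau); we bisect geometrically on the temperature tau
    until the KL constraint is (approximately) active."""
    loss = np.asarray(loss, dtype=float)
    n = len(loss)

    def tilt(tau):
        w = np.exp((loss - loss.max()) / tau)
        w /= w.sum()
        pos = w > 0
        kl = float(np.sum(w[pos] * np.log(w[pos] * n)))  # KL(w || uniform)
        return w, kl

    for _ in range(iters):
        tau = np.sqrt(tau_lo * tau_hi)
        _, kl = tilt(tau)
        if kl > eps:          # tilt too aggressive: raise the temperature
            tau_lo = tau
        else:
            tau_hi = tau
    w, _ = tilt(tau_hi)       # weakest-tilt endpoint of the bracket
    return float(np.sum(w * loss))
```

As eps shrinks to zero the robust value collapses back to the plain Monte Carlo average, which gives a quick sanity check of the construction.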

#### 2014-05-15: Luke Bornn and Pierre Jacob

Published 7 Apr. 2014 at 07:45 by Nicolas Chopin [updated: 11 Apr. 2014 at 02:25]

* Luke Bornn, Harvard University, will talk at 3pm on: Towards the Derandomization of Markov chain Monte Carlo

  In this talk, I will explore the current trend towards conducting Bayesian inference through Markov chain Monte Carlo (MCMC) algorithms which converge at a rate faster than $n^{-1/2}$ by derandomizing components of the algorithm. For instance, herded Gibbs sampling (Bornn et al., 2013) can be shown to converge in certain settings at an $n^{-1}$ rate. These algorithms exhibit remarkable similarity to existing MCMC algorithms; as an example, herded Gibbs sampling is equivalent to the Wang-Landau algorithm with certain specified tuning parameters, and with the random sampling replaced by an argmax step. We demonstrate that many such MCMC algorithms lie in a middle ground between vanilla Gibbs samplers and deterministic algorithms, using clever auxiliary variable schemes both to induce negatively correlated samples and to force exploration of the parameter space. Based on this observation, we propose several new algorithms which exploit elements of both MCMC and deterministic algorithms to improve exploration and convergence.

* Pierre Jacob, Oxford University, will talk at 4.15pm on: On exact inference and unbiased estimation

  We study the existence of algorithms generating almost surely non-negative unbiased estimators of some quantities of practical interest. Our work is motivated by recent "exact approximation" methods to compute integrals with respect to any target distribution up to arbitrary precision. These techniques can be implemented as long as non-negative unbiased estimators of the target density evaluations are available. In particular, we show that given a non-constant function f from ℝ to ℝ+ and real-valued unbiased estimators {Xn} of λ in ℝ, there is no algorithm yielding almost surely non-negative unbiased estimators of f(λ). Even if {Xn} is itself a.s. non-negative, there is no algorithm yielding a.s. non-negative unbiased estimators of f(λ) if f is not increasing. However, if the support of {Xn} is a known interval of ℝ, then such algorithms exist as long as the function f satisfies a polynomial bound away from zero. The results are discussed in the light of several typical situations corresponding to different functions f, such as inference with large datasets, inference in doubly intractable models, and inference with reference priors.
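The sign problem behind Jacob's impossibility result can be felt concretely through the classical Poisson estimator: given iid non-negative unbiased draws X_i of λ, the product below is an unbiased estimator of exp(-λ), yet it is necessarily negative whenever the Poisson count is odd, consistent with the theorem since exp(-λ) is decreasing. This is a textbook construction, not code from the talk.

```python
import numpy as np

def poisson_estimator_exp_neg(sample_x, m, rng):
    """Unbiased estimator of exp(-lambda) built from unbiased draws X_i
    of lambda: with K ~ Poisson(m),
        Z = e^m * prod_{i<=K} (-X_i / m)
    satisfies E[Z] = exp(-lambda), since E[(-lambda/m)^K] = e^{-lambda-m}.
    When the X_i are positive, Z < 0 whenever K is odd: no almost surely
    non-negative version exists for this decreasing f."""
    k = rng.poisson(m)
    z = np.exp(m)
    for _ in range(k):
        z *= -sample_x(rng) / m
    return z
```

Averaging many replicates recovers exp(-λ), but the individual draws change sign, which is exactly what rules out plugging them into exact-approximation MCMC schemes that need non-negative weights.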

#### 2014-04-10: Randal Douc and Amandine Schreck

Published 11 Mar. 2014 at 06:17 by Nicolas Chopin [updated: 28 Mar. 2014 at 10:24]

* 15h, Randal Douc, Télécom SudParis, will talk about: Uniform ergodicity of the Particle Gibbs sampler (joint work with F. Lindsten and E. Moulines)

  The particle Gibbs (PG) sampler is a systematic way of using a particle filter within Markov chain Monte Carlo (MCMC). This results in an off-the-shelf Markov kernel on the space of state trajectories, which can be used to simulate from the full joint smoothing distribution for a state space model in an MCMC scheme. We show that the PG Markov kernel is uniformly ergodic under rather general assumptions, which we will carefully review and discuss. In particular, we provide an explicit rate of convergence which reveals that: (i) for a fixed number of data points, the convergence rate can be made arbitrarily good by increasing the number of particles, and (ii) under general mixing assumptions, the convergence rate can be kept constant by increasing the number of particles superlinearly with the number of observations. We illustrate the applicability of our result by studying in detail two common state space models with non-compact state spaces.

* 16h15, Amandine Schreck, Télécom ParisTech, will talk about: An adaptive version of the equi-energy sampler

  Markov chain Monte Carlo (MCMC) methods make it possible to sample from a target distribution known up to a multiplicative constant. A canonical example of such methods is the Metropolis-Hastings (MH) sampler, which draws points from a proposal distribution and subjects them to an acceptance-rejection step. It is known, however, that the efficiency of classical MH-based samplers depends on the choice of the proposal distribution. For example, when sampling a multimodal distribution, an MH sampler without a proper proposal distribution will tend to be trapped in one of the modes. The Equi-Energy sampler proposed by Kou, Zhou and Wong in 2006 is an interacting MCMC sampler especially designed for multimodal distributions. This algorithm is based on the idea that sampling a tempered version of a multimodal distribution allows better mixing between the modes. It therefore runs several chains at different temperatures in parallel, and sometimes allows lower-tempered chains to jump to a past point of a higher-tempered chain. This jump is, as usual, associated with an acceptance-rejection step, so that the algorithm has the desired asymptotic properties. As the acceptance probability of this jump can be very low if the temperatures of the two chains, or the energies of the current and proposed points, are too different, a selection step is added to the algorithm: given energy rings, only jumps to a past point of the higher-tempered process lying in the same energy ring as the current point of the process of interest are allowed. A major drawback of this algorithm is that it depends on many design parameters and thus requires a significant tuning effort. In this work, we introduce an Adaptive Equi-Energy (AEE) sampler which automates the choice of the selection mechanism when jumping onto a state of the higher-tempered chain. We propose two different ways of defining the rings: one using empirical quantiles, and one using a stochastic approximation algorithm. We prove ergodicity and a strong law of large numbers for AEE, and for the original Equi-Energy sampler as well. Finally, we provide some illustrations of AEE.
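The conditional-SMC kernel at the heart of particle Gibbs can be sketched for a toy AR(1)-plus-noise model (the model and all tuning values are our choices, not from the talk): one particle slot is pinned to the reference trajectory, which is what makes the kernel leave the joint smoothing distribution invariant.

```python
import numpy as np

def csmc_sweep(y, x_ref, N, rng, phi=0.9, sq=1.0, sr=1.0):
    """One conditional-SMC (particle Gibbs) sweep for the model
    x_t = phi * x_{t-1} + N(0, sq^2),  y_t = x_t + N(0, sr^2).
    N-1 particle slots evolve freely; the last slot is pinned to the
    reference trajectory x_ref, with its ancestor fixed accordingly.
    Returns a new trajectory drawn from the final-weight distribution."""
    T = len(y)
    X = np.zeros((T, N))
    A = np.zeros((T, N), dtype=int)        # ancestor indices
    X[0, :-1] = rng.normal(0.0, sq, N - 1)
    X[0, -1] = x_ref[0]
    logw = -0.5 * ((y[0] - X[0]) / sr) ** 2
    for t in range(1, T):
        w = np.exp(logw - logw.max())
        w /= w.sum()
        A[t, :-1] = rng.choice(N, size=N - 1, p=w)   # multinomial resampling
        A[t, -1] = N - 1                             # pinned ancestor
        X[t, :-1] = phi * X[t - 1, A[t, :-1]] + rng.normal(0.0, sq, N - 1)
        X[t, -1] = x_ref[t]
        logw = -0.5 * ((y[t] - X[t]) / sr) ** 2
    # trace one trajectory back through the ancestor tree
    w = np.exp(logw - logw.max())
    w /= w.sum()
    k = int(rng.choice(N, p=w))
    out = np.empty(T)
    for t in range(T - 1, -1, -1):
        out[t] = X[t, k]
        k = A[t, k]
    return out
```

Iterating `csmc_sweep`, feeding each output back in as the next reference trajectory, gives a Markov chain on trajectories whose invariant law is the smoothing distribution; the talk's result concerns how fast this chain mixes as N and T grow.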

#### Arnaud Guyader and Nadia Oudjane
Published 4 Feb. 2014 at 04:49 by Nicolas Chopin [updated: 11 Mar. 2014 at 10:21]

* 15h, Arnaud Guyader, Université Rennes 2: Simulation and Estimation of Extreme Quantiles and Extreme Probabilities (joint work with Nicolas Hengartner and Eric Matzner-Løber)

  Let X be a random vector with distribution μ on R^d and Phi be a mapping from R^d to R. That mapping acts as a black box, e.g., the result of some computer experiment for which no analytical expression is available. This paper presents an efficient algorithm to estimate a tail probability given a quantile, or a quantile given a tail probability. The algorithm improves upon existing multilevel splitting methods and can be analysed using Poisson process tools that lead to an exact description of the distribution of the estimated probabilities and quantiles. The performance of the algorithm is demonstrated in a problem related to digital watermarking.

* 16h15, Nadia Oudjane, EDF R&D: A variance reduction technique for American options and application to power plant valuation (joint work with P. Del Moral and P. Hu)

  Mathematically, the problem of pricing a power plant can be stated in terms of a stochastic optimal control problem, and can be reduced in a very specific case to the thoroughly studied optimal stopping problem, or American option pricing. In this talk, we present an original variance reduction approach for Bermudan option pricing (proposed in [1]) and extend this approach to the valuation of a thermal power plant with respect to the market.

  [1] P. Del Moral, P. Hu and N. Oudjane, "Snell envelope with small probability criteria", Applied Mathematics & Optimization, Vol. 66, Issue 3, pp. 309-330, 2012.
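The flavour of multilevel splitting can be conveyed with a toy "last particle" variant for a Gaussian tail probability: at each step only the worst particle is refreshed, and the estimate is (1 - 1/N)^m after m steps. The algorithm analysed in the talk is more refined; every tuning constant below is our own choice.

```python
import numpy as np

def splitting_tail_prob(phi, q, N, rng, n_moves=20, step=0.5):
    """Toy adaptive multilevel splitting ('last particle') estimate of
    p = P(phi(X) > q) for X ~ N(0, 1). The lowest-scoring particle is
    replaced by a clone of another particle, moved by a Metropolis
    random walk targeting N(0, 1) restricted above the current level;
    after m killing steps the estimator is (1 - 1/N)^m."""
    xs = rng.normal(size=N)
    scores = phi(xs)
    m = 0
    while scores.min() < q:
        i = int(scores.argmin())
        level = scores[i]
        j = int(rng.integers(N))
        while j == i:                      # clone a *different* particle
            j = int(rng.integers(N))
        x, s = xs[j], scores[j]
        for _ in range(n_moves):           # constrained Metropolis moves
            x2 = x + step * rng.normal()
            if phi(x2) > level and np.log(rng.random()) < 0.5 * (x * x - x2 * x2):
                x, s = x2, phi(x2)
        xs[i], scores[i] = x, s
        m += 1
    return (1 - 1.0 / N) ** m
```

For q = 2 the true value is P(N(0,1) > 2) ≈ 0.0228; a crude Monte Carlo estimate with the same budget would need far more samples once q moves further into the tail, which is the point of splitting.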

#### 2014-02-13: Didier Chauveau and Adeline Samson

Published 24 Jan. 2014 at 10:17 by Nicolas Chopin [updated: 25 Jan. 2014 at 03:46]

* 15h, Didier Chauveau, Université d'Orléans: Simulation Based Nearest Neighbor Entropy Estimation for (Adaptive) MCMC Evaluation (joint work with Pierre Vandekerkhove)

  Many recent (including adaptive) MCMC methods are associated in practice with unknown rates of convergence. We propose a simulation-based methodology to estimate MCMC efficiency, grounded on a Kullback divergence criterion requiring an estimate of the entropy of the algorithm's successive densities, computed from iid simulated chains. We recently proved in Chauveau and Vandekerkhove (2012) some consistency results in the MCMC setup for an entropy estimate based on Monte Carlo integration of a kernel density estimate following Györfi et al. (1989). Since this estimate requires some tuning parameters and deteriorates as dimension increases, we investigate here an alternative estimation technique based on Nearest Neighbor (NN) estimates. This approach was initiated by Kozachenko et al. (1987) but used mostly in univariate situations until recently, when entropy estimation was taken up in other fields such as neuroscience. We show that in the MCMC setup, where moderate to large dimensions are common, this estimate is appealing on both computational and operational grounds, and that the problem of a non-negligible bias arising in high dimension can be overcome. All our algorithms for MCMC simulation and entropy estimation are implemented in an R package taking advantage of recent advances in high performance (parallel) computing.

* 16h15, Adeline Samson, Université Joseph Fourier, Grenoble: Estimation in the partially observed stochastic Morris-Lecar neuronal model with particle filter and stochastic approximation methods (joint work with Susanne Ditlevsen, Copenhagen University)

  Parameter estimation in multi-dimensional diffusion models with only one coordinate observed is highly relevant in many biological applications, but a statistically difficult problem. The coordinates are coupled, i.e. the unobserved coordinate is non-autonomous. Therefore the hidden Markov model framework is degenerate, and available methods break down. We propose a sequential Monte Carlo particle filter algorithm to impute the unobserved coordinate, and then estimate parameters maximizing a pseudo-likelihood through a stochastic version of the Expectation-Maximization algorithm. An experimental data set of intracellular recordings of the membrane potential of a spinal motoneuron of a red-eared turtle is analyzed, and the performance is further evaluated in a simulation study.
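The nearest-neighbour entropy estimate discussed by Chauveau goes back to the Kozachenko-Leonenko construction cited in the abstract; a brute-force O(N²) numpy version (for illustration only, the R package will use something faster) looks like this:

```python
import numpy as np
from math import gamma, log, pi

def nn_entropy(x):
    """Kozachenko-Leonenko nearest-neighbour entropy estimate (nats).
    x: (N, d) array of iid samples. With eps_i the Euclidean distance
    from point i to its nearest neighbour,
        H_hat = gamma_Euler + log(N-1) + log(V_d) + (d/N) * sum_i log eps_i,
    where V_d is the volume of the d-dimensional unit ball."""
    x = np.asarray(x, dtype=float)
    n, d = x.shape
    sq = ((x[:, None, :] - x[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(sq, np.inf)           # exclude self-distances
    eps = np.sqrt(sq.min(axis=1))
    v_d = pi ** (d / 2) / gamma(d / 2 + 1)
    euler = 0.5772156649015329
    return euler + log(n - 1) + log(v_d) + d * np.log(eps).mean()
```

On a standard Gaussian sample the estimate lands near the closed-form entropy (d/2) log(2πe), with no bandwidth to tune, which is the operational advantage over kernel-based estimates mentioned in the abstract.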

#### 2014-01-16: Christophe Andrieu & Mathieu Gerber

Published 20 Dec. 2013 at 06:06 by Nicolas Chopin [updated: 8 Jan. 2014 at 00:29]

* 15h, Christophe Andrieu, Bristol University: Establishing some order amongst exact approximation MCMCs

  Exact approximation Markov chain Monte Carlo (MCMC) algorithms are a general class of algorithms for Bayesian inference in complex models. We discover a general sufficient condition which allows us to order two implementations of such algorithms in terms of mean acceptance probability and asymptotic variance. The key condition is a convex order between the `weight' distributions, which emerges naturally when the weight distributions stem from importance sampling approximations with different numbers of samples. We also discuss the implications for the spectrum of the algorithms, and develop a general theory that allows us to order pseudo-marginal algorithms in terms of their spectral gaps and asymptotic performance.

* 16h15, Mathieu Gerber, CREST-ENSAE & Université de Lausanne: Sequential Quasi Monte Carlo (joint work with N. Chopin)

  We develop a new class of algorithms, SQMC (Sequential Quasi Monte Carlo), as a variant of SMC (Sequential Monte Carlo) based on low-discrepancy points. The complexity of SQMC is O(N log N), where N is the number of simulations at each iteration, and its error rate is smaller than the Monte Carlo rate O(N^{-1/2}). The only requirement to implement SQMC is the ability to write the simulation of particle x_t^n given x_{t-1}^n as a deterministic function of x_{t-1}^n and uniform variates. We show that SQMC is amenable to the same extensions as standard SMC, such as forward smoothing, backward smoothing, unbiased likelihood evaluation, and so on. In particular, SQMC may replace SMC within a PMCMC (particle Markov chain Monte Carlo) algorithm. We establish several convergence results, and provide numerical evidence that SQMC significantly outperforms SMC in several difficult scenarios.
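The better-than-N^{-1/2} behaviour of low-discrepancy points is easy to observe on a toy integral. Below is a hand-rolled Halton sequence (SQMC itself uses different, carefully ordered point sets; this is only the generic quasi-Monte Carlo phenomenon) compared with plain Monte Carlo:

```python
import numpy as np

def halton(n, primes=(2, 3)):
    """First n points of the Halton low-discrepancy sequence in [0,1)^d,
    one van der Corput digit-reversal sequence per prime base."""
    def vdc(i, base):
        x, f = 0.0, 1.0 / base
        while i > 0:
            x += f * (i % base)
            i //= base
            f /= base
        return x
    return np.array([[vdc(i + 1, b) for b in primes] for i in range(n)])

# integrate f(u) = u1 * u2 over [0,1]^2; the exact value is 1/4
n = 4096
f = lambda u: u[:, 0] * u[:, 1]
qmc_err = abs(f(halton(n)).mean() - 0.25)
rng = np.random.default_rng(0)
mc_err = abs(f(rng.random((n, 2))).mean() - 0.25)
```

For smooth integrands the Halton error decays roughly like (log N)^d / N, so at N = 4096 it is typically orders of magnitude below the Monte Carlo error of scale N^{-1/2}.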

#### 2013-11-21: Nial Friel & Jimmy Olsson

Published 8 Nov. 2013 at 01:29 by Nicolas Chopin [updated: 14 Nov. 2013 at 05:39]

* 15h: Jimmy Olsson: Partial ordering of inhomogeneous Markov chains with applications to Markov chain Monte Carlo methods (joint work with Florian Maire and Randal Douc)

  In this talk we will discuss the asymptotic variance of sample path averages for inhomogeneous Markov chains that evolve alternatingly according to two different π-reversible Markov transition kernels. More specifically, we define a partial ordering over the pairs of π-reversible Markov kernels which allows us to compare directly the asymptotic variances of the inhomogeneous Markov chains associated with each pair. As an important application, we use this result to compare different data-augmentation-type Metropolis-Hastings algorithms. In particular, we compare some pseudo-marginal algorithms and propose a novel exact algorithm, referred to as the random refreshment algorithm, which is more efficient, in terms of asymptotic variance, than the Grouped Independence Metropolis-Hastings algorithm, and has a computational complexity that does not exceed that of the Monte Carlo Within Metropolis algorithm. Finally, we provide a theoretical justification of the Carlin and Chib algorithm used in model selection.

* 16h10: Nial Friel: Convergence of Markov chains with approximate transition kernels, with application to intractable likelihood statistical models (joint work with Pierre Alquier (UCD), Aidan Boland (UCD) and Richard Everitt (University of Reading))

  Monte Carlo algorithms often aim to draw from a distribution π by simulating a Markov chain with transition kernel P such that π is invariant under P. However, there are many situations for which it is impractical or impossible to draw from the transition kernel P. For instance, this is the case with massive datasets, where it is prohibitively expensive to calculate the likelihood, and also for intractable likelihood models such as those found in spatial statistics and network analysis. A natural approach in these cases is to replace P by an approximation P̂. Using theory from the stability of Markov chains, we explore a variety of situations where it is possible to quantify how 'close' the invariant distribution π̂ of the approximate kernel P̂ is to π. We apply these results to several examples from spatial statistics and network analysis.
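One concrete instance of an approximate kernel P̂ in the massive-data setting mentioned above is Metropolis-Hastings with a subsampled likelihood. The sketch below (a generic illustration, not the authors' construction; model and tuning choices are ours) targets the posterior mean of Gaussian data but evaluates each acceptance ratio on a random batch:

```python
import numpy as np

def subsampled_mh(y, n_iter, batch, sigma_prop, rng):
    """Random-walk Metropolis for the mean theta of N(theta, 1) data
    under a flat prior, where each acceptance ratio uses the
    log-likelihood of a random subsample of size `batch`, rescaled by
    len(y)/batch. This defines an approximate kernel P_hat whose
    invariant law is close to, but not equal to, the true posterior."""
    n = len(y)
    theta = 0.0
    out = np.empty(n_iter)
    for i in range(n_iter):
        prop = theta + sigma_prop * rng.normal()
        idx = rng.choice(n, size=batch, replace=False)
        # rescaled subsample log-likelihood difference
        ll = -(n / batch) * 0.5 * (((y[idx] - prop) ** 2).sum()
                                   - ((y[idx] - theta) ** 2).sum())
        if np.log(rng.random()) < ll:
            theta = prop
        out[i] = theta
    return out
```

The chain stays centred near the sample mean but its stationary spread is inflated by the subsampling noise; quantifying that discrepancy between π̂ and π is what the perturbation bounds in the talk address.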

#### 2013-10-17: Sylvain Le Corff and Yohan Petetin

Published 23 Sep. 2013 at 04:41 by Robin Ryder [updated 1 Oct. 2013 at 11:56 by Nicolas Chopin]

* 15h: Sylvain Le Corff: Continuous-time importance sampling for jump diffusions

  This talk introduces a new algorithm to sample from continuous-time jump diffusions and to estimate expectations of functionals of such diffusions. Recently, new exact algorithms have been proposed to draw samples from finite-dimensional distributions of diffusion processes and jump diffusions without any discretization step. These algorithms are based on a rejection sampling procedure and draw skeletons at some random time steps. However, these exact methods rely on strong assumptions, such as reducibility to a unit-volatility jump diffusion via the Lamperti transform. While this assumption is easily verified for scalar diffusions, much stronger conditions are required in the multidimensional case. In this contribution, we introduce a new algorithm to compute unbiased estimates of expectations of functionals of jump diffusions which can be used under weaker assumptions. This Jump Continuous Importance Sampling (JCIS) algorithm draws weighted skeletons using an importance sampling mechanism recently introduced for diffusion processes. In this case, the sampled paths are not distributed as the diffusion process, but the weighted samples can be used to produce unbiased Monte Carlo estimates. The JCIS algorithm is compared to several other algorithms (Euler scheme with thinned jumps, Multilevel Monte Carlo path simulation, Jump Exact algorithm) using different models (Merton model, Sinus model, Double Jump model).

* 16h15: Yohan Petetin: Single- and multiple-object filtering for Markov models with jumps

  The objective of this talk is to propose algorithms for the single- and multiple-object filtering problems in hidden Markov models involving a discrete process which models regime switching or jumps. First, we focus on hidden Markov models with Markovian jumps and recall how to use sequential Monte Carlo (MC) methods for the statistical filtering problem. More precisely, we focus on Rao-Blackwellised particle filtering for linear and Gaussian Jump Markov State Space Systems (JMSS), for the approximation of the optimal Bayesian estimate (in the minimum mean square error sense). Next, we generalise the previous filtering problem to multiple-object filtering: we now estimate the state vectors and the jumps associated with a random number of objects which can appear or disappear over time, in a cluttered environment and in the presence of misdetections. We thus focus on the Probability Hypothesis Density filter and discuss the MC implementation of this filter. Finally, the last part is devoted to alternative estimation techniques for linear and Gaussian JMSS. Rather than approximating the estimate in the classical JMSS model, we build a class of dynamical models with jumps which share with JMSS some physical properties of interest, and in which the optimal Bayesian estimate (in the minimum mean square error sense) can be computed exactly (i.e., without numerical or MC approximations) and at a computational cost linear in the number of observations. We show that these models may provide an alternative to MC methods in the JMSS context.

  Bibliography:

  * A. Doucet, N. J. Gordon, and V. Krishnamurthy, "Particle filters for state estimation of jump Markov linear systems", IEEE Transactions on Signal Processing, 49(3), 613-624, March 2001.
  * Y. Petetin, M. Morelande and F. Desbouvries, "Marginalized particle PHD filters for multiple object Bayesian filtering", IEEE Transactions on Aerospace and Electronic Systems, accepted for publication, 2013.
  * Y. Petetin and F. Desbouvries, "A new filtering algorithm for linear and Gaussian jump Markov models" (in French), Proceedings of the 24th Gretsi colloquium, Brest, France, September 3-6, 2013.
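Le Corff's abstract lists an "Euler scheme with thinned jumps" among the baseline algorithms. A minimal numpy version for a scalar diffusion with additive compound Poisson jumps (the model, dynamics and parameters are our toy choices, not those of the talk) draws the jump count on each step and adds the jump sum to the Gaussian Euler increment:

```python
import numpy as np

def euler_jump_paths(x0, T, n_steps, n_paths, rng,
                     mu=0.0, sigma=1.0, lam=2.0, jump_mu=0.0, jump_sd=0.5):
    """Euler scheme with thinned jumps for
        dX_t = mu dt + sigma dW_t + dJ_t,
    with J a compound Poisson process of rate lam and N(jump_mu,
    jump_sd^2) jump sizes. Per step, the Poisson jump count k is drawn
    and the jump sum N(k * jump_mu, k * jump_sd^2) is added to the
    Gaussian increment. Returns the n_paths terminal values X_T."""
    dt = T / n_steps
    x = np.full(n_paths, float(x0))
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        k = rng.poisson(lam * dt, size=n_paths)        # jumps this step
        dj = rng.normal(jump_mu * k, jump_sd * np.sqrt(k))
        x += mu * dt + sigma * dw + dj
    return x
```

For this additive model E[X_T] = x0 + mu T + lam T jump_mu and Var[X_T] = sigma^2 T + lam T (jump_mu^2 + jump_sd^2), which gives an immediate check of the scheme; the discretization bias that JCIS avoids shows up only in path functionals, not in these terminal moments.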

#### 2013-09-19: Joseph Dureau & Thierry Dumont (room 01 of ENSCP, in front of IHP)

Published 26 Aug. 2013 at 02:04 by Nicolas Chopin [updated: 12 Sep. 2013 at 07:17]

* 3pm: Joseph Dureau: The Public Library of Models: an open source project towards social modeling

  Taking the example of epidemiology, the vagaries of mathematical formulations and practical implementations create frictions that prevent modelers from sharing their work, comparing their results, and making them easily actionable to serve decision-making. The objective of the plom-pipe library is to reduce this technical friction by providing a high-level grammar for the definition of a wide class of state-space models. For models falling within this grammar, corresponding simulation code and Bayesian inference algorithms can be automatically generated in parallelisable C. The definition of standard mathematical formulations and implementations allows wide and systematic comparison between models, facilitating model selection. In addition, inference pipelines can be constructed by leveraging approximate and exact mathematical formulations of the model, for an efficient and automated exploration of posterior probability densities. Lastly, this standardisation process has permitted the development of a web portal for modelers to publish, share, and review their work in real time.

* 4.15pm: Thierry Dumont: Indoor localization: a simultaneous localization and mapping algorithm

  Performing indoor localization is a challenging task due to the unpredictable behaviour of radio waves in confined environments. The accuracy strongly relies on the quality of the propagation map estimation. Usually, such estimation is performed deterministically or using an off-line measurement campaign. However, these techniques do not take into account the environmental dynamics responsible for accuracy degradation. I will present an on-line estimation procedure developed by S. Le Corff and myself. The procedure relies on a semi-parametric propagation model that accounts for the influence of the environment on wave propagation. The inference task is performed on-line, using the observations sequentially, through an on-line Expectation-Maximisation algorithm. The performance of the algorithm is illustrated using both simulated data and a real data set.
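The on-line Expectation-Maximisation idea can be illustrated on a generic toy model rather than the semi-parametric propagation model of the talk. Below is a stochastic-approximation online EM (in the flavour of Cappé and Moulines, 2009) for a two-component Gaussian mixture with unit variances and equal weights; initialisation, step-size schedule and all names are our choices:

```python
import numpy as np

def online_em_gmm(stream, rate=0.6):
    """Online EM for a two-component Gaussian mixture with unit
    variances and equal weights. Running averages of the E-step
    sufficient statistics (responsibilities and responsibility-weighted
    observations) are updated with step size (n + 5)^{-rate}, and the
    M-step mu = s2 / s1 is applied after every observation."""
    mu = np.array([-1.0, 1.0])       # crude symmetric initialisation
    s1 = np.array([0.5, 0.5])        # running E[responsibility]
    s2 = mu * s1                     # running E[responsibility * y]
    for n, y in enumerate(stream, start=1):
        r = np.exp(-0.5 * (y - mu) ** 2)
        r /= r.sum()                 # E-step: responsibilities
        g = (n + 5.0) ** -rate
        s1 = (1 - g) * s1 + g * r
        s2 = (1 - g) * s2 + g * r * y
        mu = s2 / s1                 # M-step
    return np.sort(mu)
```

Each observation is used once and discarded, which is the property that makes this family of algorithms suitable for the sequential, streaming setting of the localization problem.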
