Session of 18 February 2019

Session organized by Estelle Kuhn and Mathilde Lougeot

Venue: IHP, Amphi Hermite

14.00: Benoit Henry (Université de Lille 1, Laboratoire Paul Painlevé)

Title: Parametric and non-parametric estimation of the lifetime distribution of aging populations

Abstract: In this talk, we introduce a population dynamics model in which individuals live and reproduce in an i.i.d. fashion, with Poissonian reproduction, but whose lifetimes follow an arbitrary distribution. If the lifetimes of all individuals are observed, this lifetime distribution is easy to estimate (provided the population does not die out too quickly). The goal of this talk is to understand whether this distribution can still be estimated when only the evolution over time of the total population size is observed. This amounts to assuming that only the process $(N_{t},\ t\in\mathbb{R}_{+})$ counting the number of individuals alive at time $t$ is observed. This process is a (binary, homogeneous) branching process, in general non-Markovian, known as a Crump-Mode-Jagers process.
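As an illustration of the population model above, here is a minimal simulation sketch of a binary homogeneous Crump-Mode-Jagers process: each individual draws an i.i.d. lifetime from an arbitrary distribution and, while alive, produces offspring at the points of a Poisson process. The birth rate, lifetime distribution and time horizon below are illustrative assumptions, not values from the talk.

```python
import random

def simulate_cmj(birth_rate, lifetime_sampler, horizon, seed=0):
    """Simulate a binary homogeneous Crump-Mode-Jagers process.

    Each individual lives for an i.i.d. lifetime drawn via
    `lifetime_sampler` and, while alive, gives birth to single
    offspring at the points of a Poisson process of rate `birth_rate`.
    Returns the time-sorted list of (time, +1/-1) jump events of N_t
    up to `horizon`, starting from one individual at time 0.
    """
    rng = random.Random(seed)
    events = []          # (time, +1 for a birth / -1 for a death)
    stack = [0.0]        # birth times of individuals left to process
    while stack:
        birth = stack.pop()
        if birth >= horizon:
            continue
        events.append((birth, +1))
        death = birth + lifetime_sampler(rng)
        if death < horizon:
            events.append((death, -1))
        # Poisson birth times during this individual's lifetime
        t = birth
        while True:
            t += rng.expovariate(birth_rate)
            if t >= min(death, horizon):
                break
            stack.append(t)
    return sorted(events)

def population_size(events, t):
    """N_t: number of individuals alive at time t."""
    return sum(delta for (s, delta) in events if s <= t)
```

For example, `simulate_cmj(1.0, lambda rng: rng.expovariate(0.5), horizon=5.0)` simulates the process with unit birth rate and exponential lifetimes of mean 2; only the resulting step function $t \mapsto N_t$ would be available to the observer in the setting of the talk.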

15.00: Alain Durmus (ENS Paris Saclay, CMLA)

Title: On the convergence of the Hamiltonian Monte Carlo algorithm and other irreversible MCMC methods

Abstract: Hamiltonian Monte Carlo is a very popular MCMC method among Bayesian statisticians for obtaining samples from a posterior distribution. The algorithm relies on a discretization of the Hamiltonian dynamics that leave the target density invariant, combined with a Metropolis step. In this talk, we will discuss convergence properties of this method for sampling from a positive target density p on $\mathbb{R}^{d}$ with either a fixed or a random number of integration steps. More precisely, we will present mild conditions on p ensuring φ-irreducibility and ergodicity of the associated chain. We will also present verifiable conditions which imply geometric convergence. We will conclude by introducing new exact continuous-time MCMC methods, in particular the Bouncy Particle Sampler, for which new theoretical results will be given.
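The discretize-then-correct scheme described in the abstract can be sketched as follows: a leapfrog integration of the Hamiltonian dynamics with a fixed number of steps, followed by a Metropolis accept/reject so that the target density stays invariant. The target, step size and number of integration steps below are illustrative choices, not taken from the talk.

```python
import numpy as np

def hmc_sample(log_density, grad_log_density, x0, n_samples,
               step_size=0.1, n_leapfrog=20, rng=None):
    """Hamiltonian Monte Carlo with a fixed number of leapfrog steps.

    Each iteration resamples a Gaussian momentum, integrates the
    Hamiltonian dynamics with the leapfrog scheme, and accepts or
    rejects the proposal with a Metropolis step that corrects the
    discretization error, leaving the target density invariant.
    """
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)           # fresh momentum
        x_new, p_new = x.copy(), p.copy()
        # leapfrog: half momentum step, alternating full steps,
        # final half momentum step
        p_new += 0.5 * step_size * grad_log_density(x_new)
        for _ in range(n_leapfrog - 1):
            x_new += step_size * p_new
            p_new += step_size * grad_log_density(x_new)
        x_new += step_size * p_new
        p_new += 0.5 * step_size * grad_log_density(x_new)
        # Metropolis correction based on the Hamiltonian difference
        log_accept = (log_density(x_new) - 0.5 * p_new @ p_new
                      - log_density(x) + 0.5 * p @ p)
        if np.log(rng.uniform()) < log_accept:
            x = x_new
        samples.append(x.copy())
    return np.array(samples)
```

On a standard Gaussian target (`log_density = lambda x: -0.5 * x @ x`), the chain produces samples whose empirical mean and variance approach 0 and 1; the irreducibility and ergodicity results of the talk concern exactly this kind of chain.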

16.00: Guillaume Garrigos (Université Paris Diderot, LPSM)

Title: Model Consistency for Learning with low-complexity priors

Abstract: We consider supervised learning problems where the prior on the coefficients of the estimator is an assumption of low complexity (such as low rank or structured sparsity). An important question in this setting is that of model consistency: whether the correct structure (for instance the support or the rank) is recovered by minimizing a regularized empirical risk, or by taking an approximate solution (e.g. one computed by an algorithm). It is known in inverse problems that model consistency holds under appropriate non-degeneracy conditions. However, such conditions typically fail for highly correlated designs (typical in learning), and regularization methods are then observed to select larger models. It is also known that deterministic algorithms succeed in identifying the structure of the solution, while a simple stochastic gradient method fails to do so. In this talk, we provide the theoretical underpinning of this behavior using the notion of mirror-stratifiable regularizers, which covers many practical cases. We will see that in general the complexity of approximate solutions obeys a "sandwich" principle, in the sense that they lie between two strata: a small one corresponding to the complexity of the expected estimator, and a larger one associated with a certain dual certificate. We will also see that, for this result to hold, it is important to use a stochastic algorithm with reduced variance.
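To illustrate the structure-identification phenomenon discussed above, here is a minimal sketch (not the speaker's method) of a deterministic algorithm that identifies the support: the lasso solved by proximal gradient descent (ISTA), whose soft-thresholding step sets coordinates exactly to zero and thus recovers the sparsity stratum of the solution. The design, sparsity level and regularization parameter are illustrative assumptions.

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1 (componentwise shrinkage)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def ista(A, b, lam, n_iter=500):
    """Proximal gradient (ISTA) for the lasso:

        min_x  0.5 * ||A x - b||^2 + lam * ||x||_1

    The proximal step zeroes out small coordinates exactly, so after
    finitely many iterations the iterates identify the support (the
    low-complexity stratum) of the solution, under non-degeneracy.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)             # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

On a well-conditioned random design with a 3-sparse ground truth, the iterate ends up with exact zeros outside the true support. A plain stochastic subgradient method would instead produce iterates with no exact zeros, which is the failure of identification that, as the abstract notes, variance-reduced stochastic algorithms avoid.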