Location: IHP, amphi Hermite
14:00: Alisa Kirichenko (University of Oxford)
Title: Bayesian generalized modeling when the model is wrong
Abstract: Over the last few years it has become clear that Bayesian inference can perform rather poorly under misspecification. A possible remedy is to use a generalized Bayesian method instead, i.e. to raise the likelihood in Bayes' rule to some power, called a learning rate. In this talk I present results on the theoretical and empirical performance of generalized Bayesian methods. I discuss conditions under which the posterior with a suitably chosen learning rate concentrates around the best approximation of the truth within the model, even when the model is misspecified. In particular, these conditions can be shown to hold for generalized linear models (GLMs). Suitable inference algorithms (Gibbs samplers) for computing generalized posteriors in the context of GLMs are devised, and experiments show that the method significantly outperforms other Bayesian estimation procedures on both simulated and real data.
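For concreteness, the generalized posterior described above can be written as follows (a standard formulation matching the abstract's description, not a formula quoted from the talk):

```latex
% Generalized (tempered) posterior with learning rate \eta: the likelihood
% in Bayes' rule is raised to the power \eta before normalization.
\[
  \pi_\eta(\theta \mid x_1, \dots, x_n)
  \;\propto\;
  \pi(\theta) \prod_{i=1}^{n} p_\theta(x_i)^{\eta}.
\]
% \eta = 1 recovers the ordinary posterior; \eta < 1 tempers the influence
% of a possibly misspecified likelihood relative to the prior \pi.
```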
15:00: Vincent Divol (Université Paris Saclay)
Title: Wasserstein minimax estimation on manifolds
Abstract: Assume that we observe i.i.d. points lying close to some unknown d-dimensional submanifold of class C^k in a possibly high-dimensional space R^D. We study the problem of reconstructing the probability distribution generating the sample. After remarking that this problem is degenerate for a large class of standard losses (L_p, Hellinger, Kullback-Leibler, etc.), we focus on the Wasserstein loss, for which we build an estimator, based on kernel density estimation, whose rate of convergence depends on d, k, and the regularity s of the underlying density, but not on the ambient dimension D. This estimator is shown to be minimax and attains considerably faster rates than the naive empirical estimator.
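As a toy numerical illustration of the two estimators being compared, here is a minimal one-dimensional sketch contrasting the empirical measure with a kernel-density-based estimate under the 1-Wasserstein distance (the distribution, sample sizes, and kernel are arbitrary illustrative assumptions; the talk's rate improvement concerns densities on d-dimensional submanifolds and is not reproduced by this 1-D example):

```python
# Toy 1-D comparison (illustrative only, not the talk's estimator):
# empirical measure vs. a kernel density estimate under the W_1 distance.
import numpy as np
from scipy.stats import gaussian_kde, wasserstein_distance

rng = np.random.default_rng(0)
sample = rng.beta(2.0, 5.0, size=500)      # observed i.i.d. sample

# KDE estimator: draw a large sample from a Gaussian kernel density estimate.
kde = gaussian_kde(sample)
kde_sample = kde.resample(20_000, seed=1).ravel()

# Fresh draws from the true distribution serve as a proxy for the truth.
truth = rng.beta(2.0, 5.0, size=20_000)

print("W1(empirical measure, truth):", wasserstein_distance(sample, truth))
print("W1(KDE estimator,     truth):", wasserstein_distance(kde_sample, truth))
```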
16:00: Olga Klopp (Essec - CREST)
Title: Matrix concentration and some applications
Abstract: In this talk, I will present some applications of matrix concentration. I will start by recalling the matrix Bernstein inequality, which is probably the most valuable result on matrix concentration. I will then show applications of the matrix Bernstein inequality to several empirical matrix approximation problems: matrix completion, link prediction in networks, and topic modeling.
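For reference, one standard form of the matrix Bernstein inequality recalled in the talk (Tropp's symmetric-matrix version; the speaker may state a different variant):

```latex
% Matrix Bernstein inequality (symmetric case, one standard form):
% X_1, ..., X_n independent random symmetric d x d matrices with
% E[X_i] = 0 and \|X_i\| \le L almost surely; \|\cdot\| is the operator norm.
\[
  \mathbb{P}\left( \Big\| \sum_{i=1}^{n} X_i \Big\| \ge t \right)
  \;\le\;
  2d \exp\!\left( \frac{-t^2/2}{\sigma^2 + Lt/3} \right),
  \qquad
  \sigma^2 = \Big\| \sum_{i=1}^{n} \mathbb{E}[X_i^2] \Big\|,
  \quad t \ge 0.
\]
```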