Session of March 22, 2021

Session organized by Alain Célisse and Céline Duval.

Session streamed via Zoom (link here).


14:00: Pierre Jacob (Harvard University)

Title: Couplings of Markov chains and the Poisson equation

Abstract: Many statistical adventures involve the task of sampling from probability distributions. In general, this task requires non-trivial computational methods. Markov chain Monte Carlo methods constitute a wide and popular class of algorithms that iteratively construct a sequence of random variables, with the guarantee that the distribution of interest is attained in the limit of the number of iterations. This talk will describe some old and new tools, based on couplings and the Poisson equation, to decide on the number of iterations to perform, taking bias and variance into account. Illustrations include a "donkey walk" related to Dempster-Shafer inference, a simple Markov chain to sample from conditional Bernoulli distributions, and Gibbs samplers for high-dimensional regression with shrinkage priors.
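
As a toy illustration of the coupling tools in question (a minimal sketch, not the speaker's algorithms), the Python snippet below couples two random-walk Metropolis chains targeting a standard normal: the Gaussian proposals are drawn from a maximal coupling and the two accept/reject decisions share a common uniform, so the chains eventually meet exactly and then stay together. The meeting time is the kind of diagnostic one can use to decide how many iterations to perform; the target, step size, and helper names are illustrative choices.

```python
import numpy as np

def normal_logpdf(z, loc, scale):
    return -0.5 * ((z - loc) / scale) ** 2 - np.log(scale) - 0.5 * np.log(2 * np.pi)

def maximal_coupling(p_sample, p_logpdf, q_sample, q_logpdf, rng):
    """Sample (X, Y) with X ~ p and Y ~ q, maximizing the probability that X == Y."""
    x = p_sample()
    if np.log(rng.uniform()) < q_logpdf(x) - p_logpdf(x):
        return x, x                           # the two draws coincide
    while True:                               # otherwise, draw Y from the residual of q
        y = q_sample()
        if np.log(rng.uniform()) > p_logpdf(y) - q_logpdf(y):
            return x, y

def coupled_mh_step(x, y, log_target, step, rng):
    """One step of two random-walk Metropolis chains with maximally coupled proposals."""
    px, py = maximal_coupling(
        lambda: rng.normal(x, step), lambda z: normal_logpdf(z, x, step),
        lambda: rng.normal(y, step), lambda z: normal_logpdf(z, y, step),
        rng,
    )
    log_u = np.log(rng.uniform())             # common uniform for both accept/reject decisions
    x = px if log_u < log_target(px) - log_target(x) else x
    y = py if log_u < log_target(py) - log_target(y) else y
    return x, y

# Two chains targeting a standard normal, started far apart; record the meeting time.
rng = np.random.default_rng(1)
log_target = lambda z: -0.5 * z ** 2
x, y, t = 10.0, -10.0, 0
while x != y:
    x, y = coupled_mh_step(x, y, log_target, step=1.0, rng=rng)
    t += 1
print("chains met after", t, "iterations")
```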


15:00: Ayoub Belhadji (ENS Lyon)

Title: Kernel quadrature and interpolation using determinantal sampling

Abstract: We study approximation problems in reproducing kernel Hilbert spaces (RKHS) using random nodes. More precisely, we focus on kernel quadrature and kernel interpolation for smooth functions living in an RKHS, using nodes that follow the distribution of a determinantal point process (DPP) or of mixtures of DPPs. The definition of these DPPs is tailored to the RKHS. We prove fast convergence rates that depend on the eigenvalues of the RKHS kernel. This unified analysis gives new insights into the rates of quadratures and interpolations based on DPPs, especially for high-dimensional numerical integration problems.
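
For a concrete (and much simplified) picture of the quadrature side, the sketch below computes optimal kernel-quadrature weights for the Brownian-motion kernel k(x, y) = min(x, y) on [0, 1] with the uniform measure, whose mean embedding has the closed form mu(x) = x - x^2/2. The nodes here are i.i.d. uniform purely for simplicity; the point of the talk is to replace them with nodes drawn from a DPP (or a mixture of DPPs) tailored to the kernel.

```python
import numpy as np

# Brownian-motion kernel k(x, y) = min(x, y) on [0, 1]; against the uniform
# measure its mean embedding is mu(x) = integral_0^1 min(x, y) dy = x - x^2 / 2.
def kernel(x, y):
    return np.minimum(x[:, None], y[None, :])

def mean_embedding(x):
    return x - 0.5 * x ** 2

rng = np.random.default_rng(0)
n = 20
nodes = np.sort(rng.uniform(size=n))          # i.i.d. stand-in for DPP nodes

K = kernel(nodes, nodes) + 1e-10 * np.eye(n)  # Gram matrix, jittered for stability
weights = np.linalg.solve(K, mean_embedding(nodes))  # optimal quadrature weights

f = lambda x: np.sin(np.pi * x)               # a test function in this RKHS (f(0) = 0)
print("kernel quadrature:", weights @ f(nodes), "  exact integral:", 2 / np.pi)
```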


16:00: Ulysse Marteau (INRIA)

Title: Finding Global Minima via Kernel Approximations

Abstract: We consider the global minimization of smooth functions based solely on function evaluations. Algorithms that achieve the optimal number of function evaluations for a given precision level typically rely on explicitly constructing an approximation of the function, which is then minimized with algorithms that have exponential running-time complexity. In this work, we consider an approach that jointly models the function to approximate and finds a global minimum. This is done by using infinite sums of squares of smooth functions and has strong links with polynomial sum-of-squares hierarchies. Leveraging recent representation properties of reproducing kernel Hilbert spaces, the infinite-dimensional optimization problem can be solved by subsampling in time polynomial in the number of function evaluations, with theoretical guarantees on the obtained minimum. Given n samples, the computational cost is O(n^3.5) in time and O(n^2) in space, and we achieve a convergence rate to the global optimum that is O(n^(−m/d + 1/2 + 3/d)), where m is the degree of differentiability of the function and d the number of dimensions. The rate is nearly optimal in the case of Sobolev functions and, more generally, makes the proposed method particularly suitable for functions with a large number of derivatives. Indeed, when m is of the order of d, the convergence rate to the global optimum does not suffer from the curse of dimensionality, which affects only the worst-case constants (which we track explicitly throughout the paper).
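
As a rough sketch of this sum-of-squares formulation (assuming the numpy and cvxpy packages; the kernel, the trace regularization, and all parameter values are illustrative choices, not those of the paper), one can search for the largest constant c such that f(x_i) - c equals a positive semidefinite quadratic form in sampled kernel features, obtained here from a Cholesky factor of the Gram matrix:

```python
import numpy as np
import cvxpy as cp

# Toy objective on [0, 1].
f = lambda x: (x - 0.3) ** 2 + 0.1 * np.sin(10 * x)

rng = np.random.default_rng(0)
n = 30
xs = np.sort(rng.uniform(size=n))
fs = f(xs)

# Gaussian kernel Gram matrix; the rows of its Cholesky factor R act as
# finite-dimensional feature vectors with R[i] @ R[j] = K[i, j].
sigma = 0.2
K = np.exp(-(xs[:, None] - xs[None, :]) ** 2 / (2 * sigma ** 2))
R = np.linalg.cholesky(K + 1e-8 * np.eye(n))

# SDP: maximize a lower bound c such that each f(x_i) - c is a PSD quadratic
# form in the sampled features (a "sum of squares" at the sample points),
# with a small trace penalty playing the role of regularization.
B = cp.Variable((n, n), PSD=True)
c = cp.Variable()
constraints = [fs[i] - c == R[i] @ B @ R[i] for i in range(n)]
prob = cp.Problem(cp.Maximize(c - 1e-3 * cp.trace(B)), constraints)
prob.solve()

grid = np.linspace(0.0, 1.0, 2001)
print("SDP estimate of the global minimum:", float(c.value))
print("minimum of f on a fine grid:       ", f(grid).min())
```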