Organization: Jean-René Chazottes, Michel Ledoux, Frank Redig

Contents:
1. Presentation
2. The five mini-courses
3. Speakers (eleven plenary talks)

You can download some of the mini-courses and some of the talks here.

1. Presentation.

The workshop will take place at CIRM (Centre International de Rencontres Mathématiques), Marseille, France.

The aim of this workshop is to bring together people from different areas where concentration inequalities play an important role but which have evolved rather independently. We believe that the workshop can thus create a stimulating exchange of information and be a starting point for new research, sustaining future projects and collaborations. More precisely, we aim at bringing together people from probability and statistics, discrete mathematics, and statistical mechanics who are interested in concentration inequalities and their applications. In recent years, concentration inequalities have been intensely studied and used as a powerful tool in various areas such as convex geometry, asymptotic geometric analysis, statistical physics (spin glasses, Gibbs measures), dynamical systems, probability (information-theoretic inequalities, random matrices, Markov processes, random graphs, percolation, quantum information), statistics (oracle inequalities, empirical distributions, model selection), learning theory, and randomized algorithms.

There will be mini-courses during the mornings. The talks will cover more specific subjects. In the afternoons there will be time available for individual discussions and for fueling new collaborations.

2. The five mini-courses are the following:

Sourav CHATTERJEE (Courant Institute, New York, USA): Computation of variance upper bounds and related problems
Abstract: The primary job of a probabilist is to compute probabilities. When that is not so easy, a simpler target is to compute expectations and variances.
The purpose of this mini-course is twofold: first, to point out that in many modern problems we do not know how to obtain the correct order of the variance. Examples include problems from spin glasses, polymer models, first-passage percolation, etc. Second, to explain how the use of hypercontractivity and other techniques can lead to improvements of the classical variance bounds in these problems, which in turn lead to interesting and mysterious conclusions such as the phenomena of chaos and multiple valleys.

Arnaud GUILLIN (University of Clermont-Ferrand, France): Functional inequalities: from basics to recent developments
Abstract: We will present during these three lectures the basics of the usual functional inequalities, namely two of their most important applications — the concentration phenomenon and convergence to equilibrium for the naturally associated process (related also to tensorization, perturbation, ...) — and well-known conditions that ensure them (Bakry-Émery, Hardy inequalities, ...). We will also try to cover recent progress on conditions ensuring these inequalities (Lyapunov conditions, large deviations, ...). The three lectures will be focused on (and thus divided among) three different but linked inequalities: Poincaré (and weak versions), logarithmic Sobolev, and transportation-information inequalities.

Pascal MASSART (University Paris XI, Orsay, France): Model selection: a non-asymptotic view
Abstract: Model selection is a classical topic in statistics. The idea of selecting a model by penalizing a log-likelihood-type criterion goes back to the early seventies with the pioneering works of Mallows and Akaike. One can find many consistency results in the literature for such criteria. These results are asymptotic in the sense that one deals with a given number of models while the number of observations tends to infinity. We shall give an overview of a non-asymptotic theory for model selection based on concentration inequalities.
In various contexts of function estimation it is possible to design penalized log-likelihood-type criteria with penalty terms depending not only on the number of parameters defining each model (as for the classical criteria) but also on the "complexity" of the whole collection of models to be considered. For the practical relevance of these methods, it is desirable to get a precise expression for the penalty terms involved in the penalized criteria on which they are based. Our approach relies heavily on concentration inequalities, the prototype being Talagrand's inequality for empirical processes, which leads to explicit forms for the penalties. Simultaneously, we derive non-asymptotic risk bounds for the corresponding penalized estimators, showing that they perform almost as well as if the "best model" (i.e., the one with minimal risk) were known. Our purpose will be to give an account of the theory through nowadays-standard results but also to present some recent advances. We shall especially introduce the notion of minimal penalty and discuss the possibility of designing data-driven penalties through some heuristics that lead to open problems.

Alessandro PANCONESI (La Sapienza, Roma): Randomized Algorithms: Looking for certain uncertainty
Abstract: In computer science, randomized algorithms are everywhere. This ubiquity is a measure of their success: they are faster and stronger, and sometimes they can even solve problems that are unattainable for their deterministic counterparts. If one seeks control over the behaviour of a randomized algorithm (e.g. its running time), concentration inequalities are very useful, especially when they can deal with lack of independence, which is the typical situation in the analysis of algorithms. The surprising end result is that, in spite of its built-in randomness, the uncertainty of a randomized algorithm can, for all practical purposes, be eliminated.
We will discuss the ubiquity of randomized algorithms in contemporary computer science and see how concentration inequalities can be of invaluable help when analysing them.

Devdatt P. DUBHASHI (Chalmers University, Sweden): Johnson-Lindenstrauss, Compressed Sensing and Machine Learning
Abstract: We will discuss the classical Johnson-Lindenstrauss (JL) lemma on random projections, giving both concentration results and its algorithmic applications. We will then connect it to recent developments in compressed sensing and machine learning: (1) we will show the connections of the JL lemma to the restricted isometry property (RIP) in compressed sensing, and (2) we will show how the JL lemma together with random sampling can be used to scale up support vector machine (SVM) based algorithms for classification.

3. Speakers (plenary talks):

The eleven speakers are G. Aubrun, R. van den Berg, F. Bolley, P. Cattiaux, P. Collet, S. van de Geer, L. Kontorovich, C. Léonard, M. Luczak, E. Milman, P.-M. Samson.

Titles and abstracts:
Guillaume Aubrun: "Concentration of measure and entropy of quantum channels"
Abstract: We investigate the problem of estimating the minimum output entropy of random quantum channels. This is an important topic in quantum information theory since random channels were used by Hastings to disprove a major conjecture about the additivity of channel capacities. We show how to use concentration-of-measure techniques (in the form of Dvoretzky's theorem) in order to obtain sharp bounds on channel entropies.

Rob van den Berg: "Influence and variance inequalities in (first-passage) percolation"
Abstract: Inequalities by Talagrand (and related inequalities by Kahn, Kalai and Linial, and by Friedgut and Kalai) have been successfully applied to percolation theory by various researchers. After a short introduction I will focus on joint work with D. Kiss concerning upper bounds on the variance of the travel time in certain dependent first-passage percolation models. In the case where the underlying collection of single-edge crossing times is a Markov random field, our arguments work under a so-called high-noise condition. Thanks to recent work with R. Conijn on two-dimensional Markov fields, we can replace this by a weaker condition.

François Bolley: "Convergence to equilibrium in Wasserstein distance for Fokker-Planck equations"
Abstract: The convergence to equilibrium of the laws of SDEs, or of solutions to Fokker-Planck-type diffusion PDEs, can be studied and made quantitative by means of functional inequalities, as in the lectures by A. Guillin: these functional inequalities relate Lyapunov functionals (such as an L^2 norm, or the relative entropy with respect to the invariant measure) to their dissipation along the evolution. Here we shall study the dissipation of the (Monge) Wasserstein distance between a solution and the invariant measure, and deduce new results on the long-time convergence of the solution, in both linear and nonlinear cases. This is based on joint work with I. Gentil and A. Guillin.
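To fix ideas, the kind of convergence statement studied in Bolley's talk can be illustrated, in the simplest linear and uniformly convex case, by the following standard contraction estimate; the notation here is generic and not taken from the talk itself.

```latex
% Overdamped Langevin dynamics with potential V:
%   dX_t = -\nabla V(X_t)\,dt + \sqrt{2}\,dB_t ,
% whose invariant measure is \mu_\infty \propto e^{-V}.
% If \operatorname{Hess} V \ge \lambda\,\mathrm{Id} for some \lambda > 0,
% the law \mu_t of X_t contracts exponentially in Wasserstein distance:
\[
  W_2(\mu_t, \mu_\infty) \;\le\; e^{-\lambda t}\, W_2(\mu_0, \mu_\infty),
  \qquad t \ge 0 .
\]
```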
Patrick Cattiaux: "Deviation bounds and fluctuations for additive functionals of Markov processes"
Abstract: Let (X_s)_s be an ergodic Markov process with stationary distribution µ. In this talk we shall show how to use functional inequalities to obtain deviation bounds P_µ { (1/t) int_0^t f(X_s) ds > a } for functions f with zero µ-mean. We shall also show how to describe the fluctuations, including convergence to a stable process.

Pierre Collet: "Concentration for dynamical systems, a review"
Abstract: I will present some results on concentration for SRB measures of some classes of dynamical systems, uniformly hyperbolic or not. The singular nature of the measure on the trajectories requires some particular techniques for the proofs. I will also mention some applications to observables which are of particular interest for dynamical systems.

Sara van de Geer: "The Bernstein-Orlicz norm and deviation inequalities"
Abstract: We introduce two new concepts designed for the study of empirical processes. First, we introduce a new Orlicz norm which we call the Bernstein-Orlicz norm. This new norm interpolates between sub-Gaussian and sub-exponential tail behavior. In particular, we show how this norm can be used to simplify the derivation of deviation inequalities for suprema of collections of random variables. Secondly, we introduce chaining and generic chaining along a tree. These simplify the well-known concepts of chaining and generic chaining. The supremum of the empirical process is then studied as a special case. We show that chaining along a tree can be done using entropy with bracketing. Finally, we establish a deviation inequality for the empirical process in the unbounded case.

Leonid Kontorovich: "An Explicit Bound on the Transportation Cost Distance"
Abstract: We give what appears to be the first explicit, easily computable bound on the transportation cost distance with respect to the weighted Hamming metric.
The bound follows from Kantorovich duality and a novel inequality, which amounts to bounding the maximal value of certain linear programs and may be of independent interest. We give two applications to concentration of measure for dependent processes and pose some open problems and directions for future work.

Christian Léonard: "Entropic interpolations"
Abstract: McCann's displacement interpolations between two probability measures solve a quadratic optimal transport problem. It is known (Otto-Villani) that under a curvature-dimension bound they allow one to derive HWI inequalities, which in turn imply concentration-of-measure inequalities. Entropic interpolations between two probability measures solve an entropy minimization problem which is similar to an optimal transport problem. This statistical-physics problem was addressed by Schrödinger in 1931, motivated by an analogy with quantum mechanics. We compare some aspects of these parallel notions of interpolation and give some details about their consequences in terms of concentration of measure.

Malwina Luczak: "Differential equation approximation for a network routing model"
Abstract: We consider the Markov chain associated with a dynamic load balancing scheme known in the literature as the BDAR algorithm. We show that the chain exhibits strong concentration of measure and that its evolution may be approximated by a differential equation. Part of my aim will be to explain our techniques, which we hope may be applicable in a variety of other settings.
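The phenomenon behind Luczak's talk — a density-dependent Markov chain concentrating around the solution of a differential equation (a Kurtz-type fluid limit) — can be illustrated on a toy example. The SIS-type chain, its rates, and all parameters below are illustrative choices for the sketch, not the BDAR algorithm from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie_sis(n, beta, gamma, x0, t_end):
    """Gillespie simulation of a density-dependent SIS chain on n individuals.

    Transitions for the infected count k: k -> k+1 at rate beta*k*(n-k)/n,
    k -> k-1 at rate gamma*k.  Returns the infected fraction at time t_end.
    """
    k = int(x0 * n)
    t = 0.0
    while t < t_end:
        rate_up = beta * k * (n - k) / n
        rate_down = gamma * k
        total = rate_up + rate_down
        if total == 0.0:          # absorbed at k = 0
            break
        t += rng.exponential(1.0 / total)
        if t >= t_end:
            break
        if rng.random() < rate_up / total:
            k += 1
        else:
            k -= 1
    return k / n

def fluid_limit(beta, gamma, x0, t_end, dt=1e-3):
    """Euler integration of the limiting ODE dx/dt = beta*x*(1-x) - gamma*x."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * (beta * x * (1 - x) - gamma * x)
    return x

n = 20000
sim = gillespie_sis(n, beta=2.0, gamma=1.0, x0=0.1, t_end=5.0)
ode = fluid_limit(beta=2.0, gamma=1.0, x0=0.1, t_end=5.0)
print(abs(sim - ode))  # O(1/sqrt(n)): the chain concentrates on the ODE path
```

For large n the gap between the simulated fraction and the ODE value is of order n^{-1/2}, which is the concentration-of-measure statement in miniature.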
Emmanuel Milman: "Transference Principles for Log-Sobolev and Spectral-Gap Inequalities with Applications to Conservative Spin Systems"
Abstract: We plan to survey several old and new methods for transferring log-Sobolev and spectral-gap (or Poincaré) inequalities from a source metric-measure space to a target one when the target space's curvature is bounded below; our main tool is the equivalence between concentration and isoperimetric inequalities in the latter setting. As our main application, we obtain explicit estimates for the log-Sobolev and spectral-gap constants of a "conservative spin system", consisting of non-interacting (or weakly interacting) particles constrained to conserve the mean spin. When the self-interaction is a perturbation of a strongly convex potential, this partially recovers and partially extends previous results of Caputo, Chafaï, Grunewald, Landim, Lu, Menz, Otto, Panizo, Villani, Westdickenberg and Yau. When the self-interaction is only assumed to be (non-strongly) convex, as in the case of the two-sided exponential measure, we obtain sharp estimates on the system's spectral gap as a function of the mean spin, independently of the system size.

Paul-Marie Samson: "Around the inversion of the hierarchy: 'log-Sobolev inequality' implies 'Talagrand's transport inequality' implies 'concentration property'"
Abstract: First, the Gaussian concentration property is known to be a consequence of Talagrand's transport inequality T_2. We will show that, conversely, the Gaussian concentration property can be interpreted as a non-tight Talagrand transportation property. This inequality can be simply tightened to recover T_2 when the concentration property is dimension-free. Secondly, Otto-Villani's theorem and recent developments on non-smooth structures (such as metric spaces) indicate that the log-Sobolev inequality implies the transport inequality T_2.
In fact, this property T_2 can be characterized as a log-Sobolev inequality restricted to a class of d^2-convex functions. With this new characterization, we obtain that Talagrand's transport inequality T_2 is stable under bounded perturbations of the reference measure. The talk is based on joint works with N. Gozlan and C. Roberto.
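For reference, the hierarchy in Samson's title can be written out in the following standard form; the constants and notation are generic and not taken from the talk itself.

```latex
% Log-Sobolev inequality for \mu with constant C:
%   \operatorname{Ent}_\mu(f^2) \le 2C \int |\nabla f|^2 \, d\mu ,
% implies (Otto--Villani) Talagrand's transport inequality T_2:
\[
  W_2(\nu, \mu) \;\le\; \sqrt{2C\, H(\nu \,|\, \mu)}
  \qquad \text{for all probability measures } \nu ,
\]
% which in turn implies Gaussian concentration (Marton's argument):
% for every Borel set A with \mu(A) \ge 1/2 and its r-enlargement A_r,
\[
  \mu(A_r) \;\ge\; 1 - e^{-(r - r_0)^2 / (2C)},
  \qquad r \ge r_0 := \sqrt{2C \ln 2} .
\]
```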