BL-PRO 15

Practical information
  • Organizers : Giovanni Peccati (ULux) and Yvik Swan (ULg) 
    Local organizers : Danielle Bartholomeus (secretary, ULg), Gentiane Haesbroeck (ULg) and Yvik Swan (ULg)
  • The list of participants is here
  • The workshop took place at the "salle des Professeurs" on the first floor of the central building (building a1) of the Place du 20 Août in the center of Liège. See this link for explanations (in French). 

Program

Talks are 45 minutes long plus 5 minutes for questions. Short courses are 2 hours long with a 10-minute break in the middle. 

Thursday 29/01/2015

10h00 : Short course by Ivan Nourdin (Luxembourg) - [slides part 1 - part 2]

12h00 : Lunch
    
14h00 : Christian Döbler (Luxembourg) - [slides]
    
14h50 : Martin Lotz (Manchester) - [slides]
    
15h40 : coffee break
    
16h00 : Nathalie Eisenbaum (Paris 6)
    
16h50 : Guy Latouche (Brussels) - [slides]
    
19h00 : Dinner

Friday  30/01/2015

09h00 : Marguerite Zani (Orléans) - [slides]
    
09h50 : Mladen Savov (Bulgaria) - [slides]
    
10h40 : coffee break
    
11h10 : Antoine Gloria (Brussels) - [slides]
    
12h00 : Lunch
    
14h00 : Short course by Stéphane Boucheron (Paris 7) - [slides]

Abstracts 


Stéphane Boucheron (Paris 7) : "Concentration inequalities with some statistical illustrations"

Abstract: The concentration of measure phenomenon implies that functions of many independent random variables that do not depend too much on any of them are (almost) constant. This observation is the source of many concentration inequalities that generalize the classical exponential inequalities for sums of independent random variables (Hoeffding, Bennett, Bernstein) to more general functions. Those general functions may originate from random combinatorics, analysis of algorithms, and naturally from statistics. Indeed, the search for well-behaved risk estimates in model selection has been one of the driving forces behind the development of concentration of measure theory during the last two decades. Even though concentration inequalities might not be necessary ingredients in all developments, they have facilitated many advances in Machine Learning and Mathematical Statistics.
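
A standard instance of this phenomenon, stated here for orientation only and not taken from the abstract, is the bounded-differences (McDiarmid) inequality: if changing the i-th coordinate of its argument changes f by at most c_i, then for independent random variables X_1, ..., X_n and every t > 0,

    \[
      \mathbb{P}\bigl( f(X_1,\dots,X_n) - \mathbb{E} f(X_1,\dots,X_n) \ge t \bigr)
      \;\le\; \exp\!\Bigl( - \tfrac{2 t^2}{\sum_{i=1}^n c_i^2} \Bigr).
    \]

Taking f(x_1, ..., x_n) = x_1 + ... + x_n with bounded summands recovers Hoeffding's inequality.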

The aim of this mini-course is to provide a flavor of the techniques that may be used when deriving concentration inequalities: I will first describe two techniques that may be portrayed as variations on the jackknife, namely the entropy method and Stein's method. Second, I will illustrate the use of concentration inequalities in statistics and machine learning through the analysis of empirical risk minimization and the construction of adaptive estimates of the extreme value index. 

No prior knowledge is required beyond familiarity with probability theory at graduate level.
 
Christian Döbler (Luxembourg) : "Recurrence for the frog model with drift on Z^d"

Abstract: The frog model is a certain system of interacting random walks. At time zero, there is exactly one active frog at the origin and at all other vertices there are certain numbers of sleeping frogs. The active frog starts a nearest neighbour random walk and, once it hits a vertex that has not been visited before, all the sleeping frogs there are activated and start independent random walks by themselves, in turn activating further sleeping frogs. The model is called recurrent if, with probability one, infinitely many active frogs return to the origin; otherwise it is called transient. We present recent results and also pieces of current research on the recurrence question in the case where the underlying random walk has a drift to the right.
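
The dynamics just described are easy to simulate. The sketch below is a minimal illustrative implementation, not the speaker's code; the drift mechanism (biasing the first coordinate to the right with probability p_drift), the number eta of sleeping frogs per site, and the horizon n_steps are assumptions made purely for the example.

    import random

    def frog_model_returns(d=2, eta=1, p_drift=0.6, n_steps=50, seed=0):
        """Minimal sketch of the frog model on Z^d with a rightward drift:
        count the visits of active frogs to the origin within n_steps steps."""
        rng = random.Random(seed)
        origin = (0,) * d
        active = [origin]         # positions of the currently active frogs
        visited = {origin}        # sites whose sleeping frogs have been woken up
        returns = 0
        for _ in range(n_steps):
            new_active = []
            for pos in active:
                axis = rng.randrange(d)
                if axis == 0:
                    step = 1 if rng.random() < p_drift else -1   # drift to the right
                else:
                    step = rng.choice((-1, 1))
                new_pos = pos[:axis] + (pos[axis] + step,) + pos[axis + 1:]
                if new_pos == origin:
                    returns += 1
                if new_pos not in visited:   # first visit: wake the eta sleeping frogs
                    visited.add(new_pos)
                    new_active.extend([new_pos] * eta)
                new_active.append(new_pos)
            active = new_active
        return returns

    print(frog_model_returns())   # one realisation of the number of returns to the origin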

Nathalie Eisenbaum (Paris 6) : "On permanental vectors"

Abstract: A permanental vector with a symmetric kernel and index 2 is a vector (η_1^2, η_2^2, …, η_n^2), where (η_1, η_2, …, η_n) is a centered Gaussian vector. The definition of a permanental vector is a natural extension of this definition to non-symmetric kernels and positive indices. What is the interest of these objects? How far do they extend the definition of squared Gaussian vectors? The property of infinite divisibility will play a crucial part in our answers. 
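
For orientation (this formulation is not taken from the abstract, and conventions differ slightly across the literature), a vector (ψ_1, …, ψ_n) with nonnegative components is commonly called permanental with kernel K = (k(i,j)) and index β > 0 when its Laplace transform has the form

    \[
      \mathbb{E}\Bigl[ \exp\Bigl( -\tfrac{1}{2} \sum_{i=1}^n \alpha_i \psi_i \Bigr) \Bigr]
      \;=\; \det( I + \alpha K )^{-1/\beta},
      \qquad \alpha = \mathrm{diag}(\alpha_1,\dots,\alpha_n),\ \alpha_i \ge 0 .
    \]

For a symmetric positive semidefinite K and β = 2 this is precisely the Laplace transform of (η_1^2, …, η_n^2) with (η_1, …, η_n) a centered Gaussian vector of covariance K, which is the case described above.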

Antoine Gloria (Brussels) : "When stochastic homogenization meets Stein: a quantitative CLT for the effective conductance." Joint work with J. Nolen (Duke)

Abstract: We study a random conductance problem on a d-dimensional discrete torus of size L>0. The conductances are independent, identically distributed random variables uniformly bounded from above and below by positive constants. The effective conductance A_L of the network is a random variable, depending on L, and the main result is a quantitative central limit theorem for this quantity as L→∞. In terms of scalings, we prove that this nonlinear, nonlocal function A_L essentially behaves as if it were a simple spatial average of the conductances (up to logarithmic corrections). The main achievement of this contribution is the precise asymptotic description of the variance of A_L.
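
To fix ideas (a schematic remark, not taken from the abstract): if A_L were literally a spatial average of the order-L^d many i.i.d. conductances on the torus, the classical central limit theorem would give

    \[
      \operatorname{Var}(A_L) \;\asymp\; L^{-d}
      \qquad \text{and} \qquad
      \frac{A_L - \mathbb{E}[A_L]}{\sqrt{\operatorname{Var}(A_L)}}
      \;\xrightarrow{\ \mathrm{law}\ }\; \mathcal{N}(0,1) .
    \]

The result described above asserts that the genuinely nonlinear and nonlocal effective conductance exhibits the same behaviour, up to logarithmic corrections, together with a quantitative control of the distance to the Gaussian.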

Guy Latouche (Brussels) : "The deviation matrix, Poisson's equation, and QBDs"

Abstract: The deviation matrix is closely related to the solutions of Poisson's equation and plays an important, if largely unsung, role in the analysis of Markov chains. It suffices to recall its connections to the sensitivity analysis of the stationary distribution of a Markov chain and to the central limit theorem for Markov chains. When the state space is finite, the deviation matrix is the group inverse of I-P in discrete time, where P is the transition matrix; in continuous time, it is the group inverse of the generator.

As is often the case in Markov chain theory, Poisson's equation may be solved by purely algebraic arguments or by following a probabilistic approach. I shall focus on quasi-birth-and-death processes (QBDs) and show how one may exploit the special transition structure of QBDs, and the physical interpretation of the deviation matrix, in order to obtain a computationally useful expression. Time permitting, I shall conclude with the characterization of all the solutions of Poisson's equation.
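
For reference (standard facts recalled here, not taken from the abstract): for an ergodic discrete-time chain with transition matrix P, stationary row vector π and Π = 1π (the matrix whose rows all equal π), the deviation matrix can be written as

    \[
      D \;=\; \sum_{n \ge 0} \bigl( P^{\,n} - \Pi \bigr) \;=\; (I - P)^{\#},
    \]

the group inverse of I - P, and for a given vector g the Poisson equation

    \[
      (I - P)\, x \;=\; g - (\pi g)\,\mathbf{1}
    \]

is solved by x = D g, uniquely up to an additive multiple of the constant vector 1.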

Ivan Nourdin (Luxembourg) : "Normal approximation of conic intrinsic volumes, with applications to phase transitions in compressed sensing"

Abstract: A phase transition is a sharp change in the behavior of a computational problem as its parameters vary. The example that is probably most familiar to anyone who has looked at a paper about compressed sensing or sparsity in the last decade is the phase transition that occurs in the sparse linear inverse problem with random data, also known as the compressed sensing problem. More generally, recent empirical research indicates that many convex optimization problems with random constraints exhibit a phase transition as the number of constraints increases.

High-dimensional geometry, both its discrete and convex branches, has experienced a striking series of developments in the past 5 years. Very recently, Amelunxen, Lotz, McCoy and Tropp presented the first rigorous analysis that explains the aforementioned phase transition phenomenon. They also developed specific tools for calculating the location and the width of the transition region. The general idea behind this fascinating new theory is that there exists a parameter, called the statistical dimension, that reflects the intrinsic complexity of a signal. Reconstruction succeeds with overwhelming probability when the number of measurements is larger than the statistical dimension; otherwise, it fails with high probability.
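
To fix notation (a schematic statement, not part of the abstract): the statistical dimension of a closed convex cone C in R^d can be defined as

    \[
      \delta(C) \;=\; \mathbb{E}\bigl[ \| \Pi_C(g) \|^2 \bigr],
      \qquad g \sim \mathcal{N}(0, I_d),
    \]

where \Pi_C denotes the Euclidean projection onto C. For the linear inverse problem with m Gaussian measurements, the transition is located at the statistical dimension of the descent cone of the regularizer at the true signal: roughly speaking, recovery succeeds with high probability when m exceeds this quantity by a term of order \sqrt{d}, and fails with high probability when m falls short of it by a term of the same order.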

In a joint paper with Goldstein and Peccati, we have shown that most conic intrinsic volumes encountered in applications can be approximated by a suitable Gaussian distribution. Our results explicitly connect the sharp phase transitions with the asymptotic Gaussian fluctuations of the intrinsic volumes of the associated descent cones.

The aim of this short course is to give the audience an understanding of all the aforementioned aspects. No prior knowledge is required beyond familiarity with probability theory at graduate level.

Martin Lotz (Manchester) : "Geometric Probability, Linear Inverse Problems, and Convex Optimization"

Abstract: In this talk I give an overview of some applications of spherical integral geometry and geometric probability to fundamental problems in convex optimization, such as the solution of linear inverse and demixing problems by convex regularization, and in particular compressive sensing. Two of the topics discussed will be the phase transition phenomenon in convex optimization with random constraints, and the analysis of condition numbers by means of Gaussian comparison inequalities.

Mladen Savov (Reading) : "Some thoughts on spectral theory of non-self-adjoint Markov processes with emphasis on the class of Laguerre semigroups."

Abstract: In this talk we will present a methodology that enables us to develop the spectral theory for a class of non-self-adjoint semigroups, which we call the Laguerre semigroups. The latter can be directly linked to the semigroups of positive self-similar Markov processes, thereby allowing in some instances the computation of the transition kernel. We will discuss the main tools behind our approach and the extent of our results. We will also discuss how these developments might be viewed from the perspective of more general non-self-adjoint Markov semigroups.

Marguerite Zani (Orléans) : "Approximation complexity of additive random fields"

Abstract: We consider approximation problems for tensor products and additive random fields in the average case setting. The main question we are concerned with is "How much do we lose by considering standard information algorithms instead of those using general linear information?" We also study the probabilistic setting of the mentioned problem for tensor products.
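
To make the comparison precise (standard terminology from information-based complexity, not spelled out in the abstract): an algorithm using n pieces of standard information may only use point evaluations X(t_1), …, X(t_n) of the field, whereas general linear information allows arbitrary continuous linear functionals L_1(X), …, L_n(X). In the average case setting one compares the minimal errors

    \[
      e^{\mathrm{std}}(n) \;=\; \inf_{t_1,\dots,t_n,\ \varphi}
        \bigl( \mathbb{E}\, \| X - \varphi(X(t_1),\dots,X(t_n)) \|^2 \bigr)^{1/2}
      \qquad \text{and} \qquad
      e^{\mathrm{all}}(n) \;=\; \inf_{L_1,\dots,L_n,\ \varphi}
        \bigl( \mathbb{E}\, \| X - \varphi(L_1(X),\dots,L_n(X)) \|^2 \bigr)^{1/2},
    \]

and asks how much larger e^{std}(n), or the corresponding information complexity n(ε), can be than its counterpart for general linear information.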
     

Accommodation in Liège

Here are some hotels in Liège which offer special rates for the University of Liège: 

Some pictures

Thank you to Rola Zintout for these and these and these and also these.  

Financial support

The workshop is financially supported by Luxembourg University (Grant F1R-MTH-PUL-12PAMP (PAMPAS)), ULg (via a Soutien aux Entités de Recherche from the ARD), IAP Research Network P7/06 of the Belgian State (Belgian Science Policy) and the Belgian Statistical Society (SBS - BVS). 

The two short courses are financed by EDT Math and EDT Stat.