Spring 2023

Feb 10. David Gamarnik (MIT)

Title: Overlap gap property: A topological barrier to optimizing over random structures

Abstract: Many decision and optimization problems over random structures exhibit a gap between the existentially optimal values and the algorithmically achievable values, dubbed the statistics-to-computation gap. Examples include the problem of finding a largest independent set in a random graph, the problem of finding a near ground state in a spin glass model, the problem of finding a satisfying assignment in a random constraint satisfaction problem, and many more. Unfortunately, no formal computational hardness results exist that explain this persistent algorithmic gap.

In the talk we will describe a new approach for establishing algorithmic intractability for these problems, called the overlap gap property. Originating in statistical physics, specifically in the theory of spin glasses, it is a simple-to-describe property which a) emerges in most models known to exhibit apparent algorithmic hardness; b) is consistent with the hardness/tractability phase transition for many models analyzed to date; and, importantly, c) allows one to mathematically rigorously rule out a large class of algorithms as potential contenders, specifically algorithms that exhibit a form of stability/noise insensitivity.

We will specifically show how to use this property to obtain lower bounds, stronger than the state of the art, on the depth and size of Boolean circuits for solving two of the aforementioned problems: the problem of finding a large independent set in a sparse random graph, and the problem of finding a near ground state of a p-spin model.
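
As a toy illustration of the overlap structure in question (my own sketch, not from the talk: the graph model G(n, d/n), the randomized greedy heuristic, and all parameter values are illustrative assumptions), one can generate a sparse random graph, run a randomized greedy independent-set heuristic many times, and look at the pairwise overlaps of the solutions found:

```python
# Illustrative experiment (not from the talk): overlaps of independent sets
# produced by randomized greedy on a sparse Erdos-Renyi graph G(n, d/n).
import random

def random_graph(n, d, rng):
    """Sample G(n, d/n) as a list of adjacency sets."""
    adj = [set() for _ in range(n)]
    p = d / n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def greedy_independent_set(adj, rng):
    """Scan vertices in a random order, keeping each one with no kept neighbor."""
    order = list(range(len(adj)))
    rng.shuffle(order)
    chosen = set()
    for v in order:
        if not (adj[v] & chosen):
            chosen.add(v)
    return chosen

rng = random.Random(0)
adj = random_graph(400, 3.0, rng)
sets = [greedy_independent_set(adj, rng) for _ in range(20)]
# normalized pairwise overlaps of the 20 solutions found
overlaps = [len(a & b) / max(len(a), 1)
            for i, a in enumerate(sets) for b in sets[i + 1:]]
print(min(overlaps), max(overlaps))
```

The overlap gap property is a statement about the overlap structure of all near-optimal solutions, which this naive simulation does not certify; it only shows how overlaps of algorithmic outputs can be probed empirically.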

Joint work with Aukosh Jagannath and Alex Wein.


Mar 03. Nhu Nguyen (University of Rhode Island)

Title: Stochastic Functional Differential Equations: A Complete Characterization of Longtime Behaviors and Applications.

Abstract: We consider a system of stochastic functional differential equations, where the drift/diffusion coefficients depend on the past information of the process. Using ideas from dynamical systems theory, we provide a complete characterization of the longtime behaviors of the underlying systems, classifying whether each component converges to 0 or to an invariant measure with positive support. The proof combines techniques from stochastic processes in infinite-dimensional spaces, stochastic calculus, and coupling methods. Examples and applications are given to illustrate the results.
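
To fix ideas, a stochastic functional differential equation is one whose coefficients look back at the path. A minimal numerical sketch (my own, not from the talk: the stochastic delay-logistic model, the Euler-Maruyama discretization, and every parameter value are illustrative assumptions):

```python
# Illustrative sketch (not from the talk): Euler-Maruyama for a scalar
# stochastic delay-logistic equation
#   dX_t = X_t (a - b X_{t - tau}) dt + s X_t dW_t,
# whose drift depends on the past value X_{t - tau} of the path.
import math, random

a, b, s, tau = 1.0, 1.0, 0.2, 1.0     # illustrative parameters
dt, T = 0.01, 20.0
lag = round(tau / dt)
steps = round(T / dt)

rng = random.Random(0)
path = [0.5] * (lag + 1)              # constant initial segment on [-tau, 0]
for _ in range(steps):
    x, x_lag = path[-1], path[-1 - lag]
    dw = rng.gauss(0.0, math.sqrt(dt))
    path.append(x + x * (a - b * x_lag) * dt + s * x * dw)
print(path[-1])                        # state at time T
```

The talk's dichotomy asks, for systems of this kind, whether a component converges to 0 or settles near an invariant measure with positive support.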


Apr 14. Giorgio Cipolloni (Princeton University)

Title: How do the eigenvalues of a large non-Hermitian random matrix behave?

Abstract: We prove that the fluctuations of the eigenvalues of a large non-Hermitian random matrix X converge to the Gaussian Free Field (GFF) on the unit disk. These fluctuations appear on a non-natural scale, due to strong correlations between the eigenvalues. Then, motivated by the long-time behaviour of the ODE \dot{u} = Xu, we give a precise estimate on the eigenvalue with the largest real part and on the spectral radius of X.
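
For a quick numerical companion (a sketch under the standard assumption that X is a Ginibre matrix, i.e. iid N(0, 1/n) entries; the talk's results concern general non-Hermitian ensembles), the circular law says the eigenvalues fill the unit disk, and both the spectral radius and the largest real part concentrate near 1:

```python
# Numerical sketch: eigenvalues of an n x n Ginibre matrix (iid N(0, 1/n)
# entries) spread over the unit disk, and both the spectral radius and the
# largest real part are close to 1 for large n (circular law scaling).
import numpy as np

rng = np.random.default_rng(0)
n = 1000
X = rng.standard_normal((n, n)) / np.sqrt(n)
eigs = np.linalg.eigvals(X)

spectral_radius = np.abs(eigs).max()
largest_real_part = eigs.real.max()
print(spectral_radius, largest_real_part)
```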


Apr 21. Dan Mikulincer (MIT)

Title: Lipschitz properties of transport maps under a log-Lipschitz condition

Abstract: Consider the problem of realizing a target probability measure as the pushforward, by a transport map, of a given source measure. Typically one thinks of the target measure as 'complicated' while the source is simpler and often more structured. In such a setting, for applications, it is desirable to find Lipschitz transport maps that afford the transfer of analytic properties from the source to the target. The talk will focus on Lipschitz regularity when the target measure satisfies a log-Lipschitz condition.

I will present a transport map, constructed infinitesimally along the Langevin flow, and explain how to analyze its Lipschitz constant. The analysis of this map leads to several new results which apply both to Euclidean spaces and to manifolds, and which, at the moment, seem to be out of reach of classically studied optimal transport theory.
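
In dimension one, the simplest Lipschitz transport map is the monotone rearrangement T = F_target^{-1} o F_source. This is not the Langevin-flow construction of the talk, but it gives a quick way to probe Lipschitz constants numerically (the choice of source N(0, 1), target N(0, sigma^2), and the bisection inversion are all illustrative assumptions of this sketch):

```python
# Illustrative 1-D sketch (not the Langevin construction from the talk):
# the monotone map T = F_target^{-1} o F_source pushes the source forward
# onto the target, and its Lipschitz constant can be probed numerically.
# Source: N(0, 1); target: N(0, sigma^2), for which T(x) = sigma * x exactly.
import math

def gauss_cdf(x, sigma=1.0):
    return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

def inv_cdf(u, cdf, lo=-50.0, hi=50.0):
    """Invert a continuous increasing CDF by bisection."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if cdf(mid) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

sigma = 2.0
T = lambda x: inv_cdf(gauss_cdf(x), lambda y: gauss_cdf(y, sigma))

# finite-difference slopes of T on a grid; their maximum estimates the
# Lipschitz constant, which here should come out close to sigma
xs = [-3.0 + 0.1 * k for k in range(61)]
slopes = [(T(x + 1e-4) - T(x)) / 1e-4 for x in xs]
print(max(slopes))
```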

Joint work with Max Fathi and Yair Shenfeld.

Apr 28. Samy Tindel (Purdue University)

Title: Some rough paths techniques in reinforcement learning

Abstract: In this talk I will start by reviewing some classical results relating machine learning problems to control theory. I will mainly discuss some very basic notions of supervised learning as well as reinforcement learning. Then I will show how noisy environments lead to very natural equations involving rough paths, including a couple of motivating examples. In the second part of the talk I will explain, with a minimal amount of technicality, the techniques used to solve reinforcement learning problems. In particular, I will focus on rough HJB-type equations and their viscosity solutions. If time permits, I will give an overview of our current research program in this direction.

This talk is based on an ongoing joint work with Prakash Chakraborty (Penn State) and Harsha Honnappa (Purdue, Industrial Engineering).

May 05. Venkat Anantharam (UC Berkeley)

Title: Data-derived weak universal consistency for lossless compression

Abstract: Rich model classes for data may be too complex to admit uniformly consistent estimators. In such cases, it is conventional to settle for pointwise consistent estimators. But this viewpoint has the practical drawback that estimator performance depends on the unknown model within the model class being estimated. Even if an estimator is consistent, how well it is doing at any given time may not be clear, no matter how large the sample size of the observations.

We explore how to resolve this issue by studying model classes that may admit only pointwise consistency guarantees, yet for which enough information about the unknown model driving the observations can be inferred from the sample at hand to gauge estimator accuracy. We then say that such model classes admit data-derived weakly universally consistent estimators.

In this work we flesh out this philosophy in the framework of lossless data compression over a countable alphabet. Our main contribution is a characterization of the model classes that admit data-derived weakly universally consistent lossless compression, in terms of the presence or absence of what we term deceptive distributions (whether a distribution is deceptive is defined in the context of the model class). We also show that the ability to estimate the redundancy of compressing memoryless sources is equivalent to learning the underlying single-letter marginal in a data-derived fashion.
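
For the binary special case, the memoryless-source redundancy in the last sentence can be made concrete with the classical Krichevsky-Trofimov (add-1/2) sequential probability assignment. The sketch below (my own, not from the paper: the Bernoulli(0.3) source and the sample size are illustrative assumptions) compares its ideal code length with the entropy-ideal code length of the true source:

```python
# Illustrative binary sketch (not from the paper): code length of the
# Krichevsky-Trofimov (add-1/2) sequential probability assignment on an
# iid Bernoulli(p) sequence, versus the entropy-ideal code length of the
# true source; the excess is the (pointwise) redundancy, which for a
# binary memoryless source grows like (1/2) * log2(n) bits.
import math, random

def kt_code_length(bits):
    """Ideal code length -log2 Q(x^n) under the KT sequential assignment."""
    ones = zeros = 0
    total = 0.0
    for b in bits:
        seen = ones + zeros
        q = (ones + 0.5) / (seen + 1.0) if b else (zeros + 0.5) / (seen + 1.0)
        total -= math.log2(q)
        if b:
            ones += 1
        else:
            zeros += 1
    return total

rng = random.Random(1)
n, p = 100_000, 0.3
bits = [rng.random() < p for _ in range(n)]
true_len = -sum(math.log2(p if b else 1.0 - p) for b in bits)
redundancy = kt_code_length(bits) - true_len
print(redundancy)
```

Estimating this redundancy from the data alone is precisely the ability the abstract relates to learning the single-letter marginal.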

This is joint work with Narayana Prasad Santhanam and Wojtek Szpankowski.