Statistical physics and the Ising model

Instructors: Paul Melotti (Université Paris-Saclay) and Mendes Oulamara (formerly IHES)

Description: Statistical physics has gained a crucial role in modern probability theory over the last decades, with many mathematicians building upon and extending results from physics. We will focus on a central model, introduced by Lenz and Ising in the 1920s, which was initially designed to describe the structure of ferromagnets and their phase transitions, but which applies to a surprisingly wide and ever-expanding array of situations. Its study gives a direct, hands-on approach to many topics in statistical physics.
During the first week, we will introduce its classical theory, with examples in one dimension and on the complete graph (Curie-Weiss model), and discover the notions of phase transition, Gibbs measures, free energy, criticality and critical exponents, among others.
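For concreteness (the notation below is ours, not the course abstract's), the model on a finite graph G = (V, E) attaches a spin to each vertex, and the Gibbs measure and free energy are given, in one common normalisation, by:

```latex
% Ising model on a finite graph G = (V, E) at inverse temperature beta;
% the free energy is taken along a growing sequence of graphs.
\[
  \sigma \in \{-1, +1\}^{V}, \qquad
  H(\sigma) = -\sum_{\{i,j\} \in E} \sigma_i \sigma_j ,
\]
\[
  \mu_\beta(\sigma) = \frac{e^{-\beta H(\sigma)}}{Z_\beta}, \qquad
  Z_\beta = \sum_{\sigma \in \{-1,+1\}^{V}} e^{-\beta H(\sigma)}, \qquad
  f(\beta) = \lim_{|V| \to \infty} \frac{1}{|V|} \log Z_\beta .
\]
```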

During the second week, we will explore recent breakthroughs, including connections with the random cluster model, the so-called integrability of the model in dimension two (with Kac-Ward theory and relations to discrete geometry), and elements of conformal invariance at criticality.
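In the hands-on spirit of the course, the Gibbs measure above is easy to sample approximately. The following is a minimal sketch (ours, not course material, assuming only numpy) that runs single-spin Metropolis updates on the two-dimensional torus; the critical inverse temperature there is beta_c = ln(1 + sqrt(2))/2 ≈ 0.4407.

```python
import numpy as np

def metropolis_ising(L=32, beta=0.44, steps=200_000, seed=None):
    """Approximate sampling of the 2D Ising Gibbs measure on an
    L x L torus via single-spin Metropolis updates."""
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(L, L))
    for _ in range(steps):
        i, j = rng.integers(0, L, size=2)
        # Sum of the four neighbouring spins (periodic boundary).
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * nb  # energy change if spin (i, j) is flipped
        # Accept the flip with probability min(1, exp(-beta * dE)).
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins

# Below beta_c the magnetisation per site concentrates near 0;
# above it, spontaneous magnetisation appears.
print(abs(metropolis_ising(beta=0.6, seed=0).mean()))
```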



Invariant measures in symbolic dynamics  

Instructors: Thierry Monteil (CNRS, Université Sorbonne Paris-Nord) and Jean-Baptiste Stiegler (Université Paris-Saclay)     

Description: Symbolic dynamics allows for a combinatorial approach to the study of dynamical systems. In this course, we will introduce some basics of topological and measurable dynamics through the lens of symbolic coding. We will focus on the simplex of invariant measures, showing an interplay between probability and combinatorics.
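A classical example of such a coding (stated here for illustration; the notation is ours): tracking the orbit of a point under an irrational rotation of the circle through a two-interval partition produces a Sturmian word.

```latex
% Coding the rotation x -> x + alpha (mod 1), alpha irrational, by the
% partition {[0, 1 - alpha), [1 - alpha, 1)}; {y} denotes the
% fractional part of y.  The resulting sequence (u_n) is Sturmian.
\[
  u_n =
  \begin{cases}
    0 & \text{if } \{x + n\alpha\} \in [0, 1-\alpha),\\
    1 & \text{if } \{x + n\alpha\} \in [1-\alpha, 1),
  \end{cases}
  \qquad n \in \mathbb{N}.
\]
```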

During the first week, we will introduce important concepts in the study of dynamical systems (minimality, ergodicity, etc.) and establish a dictionary with their symbolic counterparts (uniform recurrence, frequencies, etc.). We will then see how the geometry of the Rauzy graphs associated with a symbolic dynamical system can be used to provide bounds on the number of ergodic invariant measures of the system.
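As an illustration (our sketch, not course material): the Rauzy graph of order n of a word has the length-n factors as vertices and an edge u -> v for each length-(n + 1) factor that starts with u and ends with v. The snippet below computes these graphs for the Fibonacci word, a Sturmian word whose factor complexity n + 1 makes the graphs particularly simple.

```python
def fibonacci_word(iterations=12):
    """Prefix of the Fibonacci word, the fixed point of the
    substitution a -> ab, b -> a."""
    w = "a"
    for _ in range(iterations):
        w = "".join("ab" if c == "a" else "a" for c in w)
    return w

def rauzy_graph(word, n):
    """Edge set of the Rauzy graph of order n built from the factors
    observed in `word`."""
    length_n_plus_1 = {word[i:i + n + 1] for i in range(len(word) - n)}
    return {(f[:n], f[1:]) for f in length_n_plus_1}

w = fibonacci_word()
for n in (1, 2, 3):
    edges = rauzy_graph(w, n)
    vertices = {u for u, v in edges} | {v for u, v in edges}
    # For a Sturmian word: n + 1 vertices and n + 2 edges.
    print(n, len(vertices), len(edges))
```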

During the second week, we will focus on recent developments, including some applications to the study of continued fraction algorithms, interval exchange transformations, billiards, piecewise isometries, and tilings.



Mathematical Foundations of Geometric Deep Learning and Applications

Instructor: Tolga Birdal (Imperial College London)                                  

Description: Geometric Deep Learning is a subfield of Deep Learning that develops artificial neural networks for data with non-Euclidean structure, such as graphs and manifolds, in order to enable real-world applications involving data with complex, irregular geometries. In particular, the field analyzes neural networks through the geometric priors they leverage to combat the curse of dimensionality: signals are modeled on domains endowed with symmetry groups, which serve as inductive biases for the network. In this part of the workshop, Tolga will not focus directly on (Geometric) Deep Learning and artificial neural networks. Instead, the objective is to provide the necessary mathematical background often overlooked in standard computer science curricula. Selected topics from graph theory, group theory, vector calculus, spectral theory, and the fundamentals of learning theory will be covered to the extent required to grasp the subject.
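As a small taste of how these preliminaries meet the symmetry viewpoint (our illustration, assuming numpy; not part of the course material): the eigenvalues of a graph Laplacian do not depend on how the nodes are labelled, which is precisely the permutation invariance that graph neural networks are designed to respect.

```python
import numpy as np

def laplacian_spectrum(A):
    """Sorted eigenvalues of the combinatorial Laplacian L = D - A
    of an undirected graph with adjacency matrix A."""
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))

rng = np.random.default_rng(0)

# Random simple undirected graph on 6 nodes (symmetric 0/1 adjacency).
A = np.triu((rng.random((6, 6)) < 0.5).astype(float), k=1)
A = A + A.T

# Relabel the nodes with a random permutation matrix P: A -> P A P^T.
P = np.eye(6)[rng.permutation(6)]
A_relabelled = P @ A @ P.T

# The spectrum is a permutation-invariant descriptor of the graph.
assert np.allclose(laplacian_spectrum(A), laplacian_spectrum(A_relabelled))
print(laplacian_spectrum(A))
```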

During the second week, we shall focus on certain recent works in the field and some of their applications. Invariances and symmetries arise all too commonly in data originating in the real world. Hence, it should come as no surprise that some of the most popular applications of machine learning in the 21st century have come about as a direct byproduct of Geometric Deep Learning (GDL), perhaps sometimes without this fact being fully realised. This part of the series will provide an overview, by no means comprehensive, of influential works in Geometric Deep Learning as well as of exciting and promising new applications. The motivation is twofold: to demonstrate specific instances of scientific and industrial problems where the five geometric domains commonly arise, and to serve as additional motivation for further study of Geometric Deep Learning principles and architectures.