Stefano Allesina - UChicago
Will a large complex system be stable? Will a stable complex system be large?
In 1972, May famously asked "Will a Large Complex System be Stable?" and showed that sufficiently large random ecological systems are almost surely dynamically unstable. I ask the complementary question: given a complex, stable ecological system, how many species will it support? I present two cases in which this calculation can be carried out explicitly. First, I examine a Generalized Lotka-Volterra model with random parameters and show that, under certain conditions, the expected number of coexisting species is half the number of introduced species. Next, I consider Tilman's model for metacommunities with a colonization-competition trade-off. In this highly structured model, species that are superior colonizers of empty patches are inferior competitors, and vice versa. Remarkably, in this case too the expected number of coexisting species is half of the initial pool. Finding the same result for a completely unstructured and a strongly structured model suggests that large ecological communities can be observed without any fine-tuning of species traits or parameters.
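The "half the pool survives" phenomenon can be illustrated numerically. The sketch below is not the talk's analysis: it simply integrates a Generalized Lotka-Volterra system with illustrative random parameters (standard-normal growth rates, weak random interactions, unit self-regulation are my assumptions) and counts how many species persist.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50                               # species in the initial pool
r = rng.normal(0.0, 1.0, n)          # random intrinsic growth rates (assumed)
A = rng.normal(0.0, 0.05, (n, n))    # weak random interactions (assumed)
np.fill_diagonal(A, -1.0)            # self-regulation keeps dynamics bounded

x = np.full(n, 1.0)                  # initial abundances
dt, steps = 0.01, 20000
for _ in range(steps):
    x += dt * x * (r + A @ x)        # Euler step of dx_i/dt = x_i (r_i + sum_j A_ij x_j)
    x = np.maximum(x, 0.0)           # abundances cannot go negative

survivors = int(np.sum(x > 1e-3))
print(f"{survivors} of {n} species coexist")
```

Across random realizations the survivor count fluctuates around half of the pool; any single run is only a sample.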
Maria Chiara Angelini - Università di Roma Sapienza
Monte Carlo vs Stochastic Gradient descent in inference problems
Is Stochastic Gradient Descent (SGD) substantially different from Metropolis Monte Carlo dynamics? This is a fundamental question for understanding the most widely used training algorithm in machine learning, yet it had no answer until now. I will focus on sparse, discrete optimization and inference problems. I will first show how to compute the recovery threshold for Monte Carlo algorithms, and then show that the dynamics of an SGD-like algorithm closely resembles that of Metropolis Monte Carlo at a properly chosen temperature, which depends on the mini-batch size. This quantitative matching holds both at equilibrium and in the out-of-equilibrium regime, allowing us to use known results on the performance and limits of Monte Carlo algorithms to optimize the mini-batch size in the SGD-like algorithm.
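For readers unfamiliar with the Monte Carlo side of this comparison, here is a minimal Metropolis sampler on a spin model with random couplings. The quantitative SGD-temperature mapping is the talk's result and is not reproduced here; the temperature T below is simply a free parameter standing in for the noise that, in that picture, the mini-batch size would control.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
J = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))
J = (J + J.T) / 2                    # symmetric random couplings
np.fill_diagonal(J, 0.0)

def energy(s):
    return -0.5 * s @ J @ s

T = 0.5                              # effective temperature; in the talk's picture
                                     # it would be set by the mini-batch size
s = rng.choice([-1.0, 1.0], n)
E = energy(s)
for _ in range(20000):
    i = rng.integers(n)
    dE = 2.0 * s[i] * (J[i] @ s)     # energy change of flipping spin i
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        s[i] = -s[i]                 # Metropolis acceptance rule
        E += dE
```

Lowering T makes the dynamics greedier, the analogue of shrinking the mini-batch noise in the matched SGD-like algorithm.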
Silvio Franz - Université Paris-Saclay
Title
Abstract
Marylou Gabrié - École polytechnique
Assisting sampling of physical systems with generative models
Deep generative models parametrize very flexible families of distributions that can fit complicated datasets of images or text. These models provide independent samples from complex high-dimensional distributions at negligible cost. On the other hand, sampling exactly from a target distribution, such as the Boltzmann distribution of a physical system, is typically challenging: because of dimensionality, multi-modality, ill-conditioning, or a combination of these. In this talk, I will discuss opportunities and challenges in enhancing traditional inference and sampling algorithms with learning.
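One common way to combine a generative model with exact sampling is to use it as the proposal of an independence Metropolis sampler, so that the chain remains unbiased for the target. The sketch below is a toy version of this idea: a fixed broad Gaussian (my stand-in for a trained generative model) proposes moves for a bimodal target that local samplers struggle to cross.

```python
import numpy as np

rng = np.random.default_rng(2)

def target(x):                        # bimodal, Boltzmann-like density (unnormalised)
    return np.exp(-0.5 * ((x - 2) / 0.5) ** 2) + np.exp(-0.5 * ((x + 2) / 0.5) ** 2)

def proposal_sample():                # stand-in for a generative model:
    return rng.normal(0.0, 2.5)       # cheap independent samples covering both modes

def proposal_density(x):
    return np.exp(-0.5 * (x / 2.5) ** 2)   # unnormalised is fine inside the ratio

x, samples = 0.0, []
for _ in range(20000):
    y = proposal_sample()
    # independence-Metropolis acceptance: min(1, p(y) q(x) / (p(x) q(y)))
    a = (target(y) * proposal_density(x)) / (target(x) * proposal_density(y))
    if rng.random() < a:
        x = y
    samples.append(x)
samples = np.array(samples)
```

Because the accept/reject step corrects the proposal, the chain visits both modes with the right weights even though the "model" is only approximate.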
Surya Ganguli - Stanford
Title
Abstract
Irene Giardina - Università di Roma Sapienza
Title
Abstract
Sebastian Goldt - SISSA
Learning from higher-order correlations with neural networks
Abstract
Florent Krzakala - EPFL
Machine learning and statistical physics: Lessons from simple models
Abstract
Claudio Maggi - Nanotec, CNR
Continuous phase transitions in active particle systems
In this talk I will review some recent results on continuous phase transitions in active particle systems. In the first part of the talk I will focus on simulations of active particles interacting via a quorum-sensing (QS) mechanism, by which particles change their swimming speed based on the number of perceived neighbours. I will show that if these particles slow down enough when interacting, the system undergoes full motility-induced phase separation (MIPS), and that the coexistence curve terminates at a critical point belonging to the Ising universality class [1]. In contrast, I will also show recent results demonstrating that, if these QS interactions have two competing length scales, MIPS is destabilised and the system forms an active micro-emulsion, well described by a renormalised field theory in an effective-equilibrium framework. Finally I will discuss some recent experiments in which super-paramagnetic colloids activated by a bath of swimming E. coli undergo two-dimensional melting out of equilibrium [2]. I will show how the basic physics of the experimental active crystal is well reproduced by a schematic model of active particles, and how KTHNY theory remains qualitatively valid in describing the melting scenario of this active solid.
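The quorum-sensing rule is easy to state in code. The following is a schematic active-Brownian sketch, not the models of refs. [1-2]: particles in a periodic box count neighbours within a sensing radius and slow down as crowding increases (the specific speed rule v0/(1+n) and all parameter values are my illustrative choices).

```python
import numpy as np

rng = np.random.default_rng(3)
N, L = 200, 20.0                     # particles, periodic box size (assumed)
v0, R, Dr, dt = 1.0, 1.0, 0.1, 0.05  # free speed, sensing radius, rotational diffusion
pos = rng.uniform(0, L, (N, 2))
theta = rng.uniform(0, 2 * np.pi, N)

for _ in range(500):
    # count neighbours within the sensing radius R (minimum-image distances)
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    nn = (np.linalg.norm(d, axis=2) < R).sum(axis=1) - 1
    v = v0 / (1.0 + nn)              # quorum sensing: slow down in crowds
    pos += dt * v[:, None] * np.c_[np.cos(theta), np.sin(theta)]
    pos %= L                         # periodic boundaries
    theta += np.sqrt(2 * Dr * dt) * rng.normal(size=N)
```

With a steep enough slow-down, dense regions trap particles and phase separation sets in; this toy only demonstrates the feedback, not the critical behaviour.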
[1] N.Gnan and C. Maggi Soft Matter 18, 7654 (2022)
[2] H. Massana-Cid, C. Maggi et al. arXiv:2401.09911 (2024)
Pierre Ronceray - Aix-Marseille Université
Learning the stochastic dynamics of biological matter
The dynamics of biological systems, from proteins to cells to organisms, are complex and stochastic. To decipher their physical laws, we need to bridge experimental observations and theoretical modeling. Thanks to progress in microscopy and tracking, there is today an abundance of experimental trajectories reflecting these dynamical laws. Inferring physical models from imperfect experimental data, however, is challenging and currently remains a bottleneck for data-driven biophysics. In this talk, I will present a set of tools developed to bridge this gap and permit robust and universal inference of stochastic dynamical models from experimental trajectories. These methods are rooted in an information-theoretic framework that quantifies how much can be inferred from trajectories that are short, partial and noisy. They permit the efficient inference of dynamical models for overdamped and underdamped Langevin systems, as well as the inference of entropy production rates. I will finally present early applications of these techniques, as well as future research directions.
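The simplest instance of this inference problem can be worked out in a few lines. The sketch below is not the talk's method: it simulates an overdamped Langevin trajectory with linear drift (an Ornstein-Uhlenbeck process, with parameters I chose for illustration) and recovers the drift coefficient by least squares on the increments and the diffusion coefficient from the quadratic variation.

```python
import numpy as np

rng = np.random.default_rng(4)
k_true, D_true, dt, steps = 1.0, 0.5, 0.01, 200000

# simulate an overdamped Langevin trajectory: dx = -k x dt + sqrt(2 D) dW
x = np.empty(steps)
x[0] = 0.0
noise = np.sqrt(2 * D_true * dt) * rng.normal(size=steps - 1)
for t in range(steps - 1):
    x[t + 1] = x[t] - k_true * x[t] * dt + noise[t]

dx = np.diff(x)
# least-squares fit of the drift dx ≈ -k x dt
k_hat = -np.sum(dx * x[:-1]) / (dt * np.sum(x[:-1] ** 2))
# diffusion coefficient from the mean squared increment
D_hat = np.mean(dx ** 2) / (2 * dt)
```

Real trajectories are short, partial and noisy, which is precisely where such naive estimators break down and the information-theoretic framework of the talk becomes necessary.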
Pierfrancesco Urbani - IPhT, CNRS
Dynamical mean field theory of learning
Training algorithms are key to the success of artificial and recurrent neural networks. However, we know very little about them. The prime example is stochastic gradient descent, the workhorse of deep learning technology. I will review how dynamical mean field theory can be used to gain some understanding of training algorithms in prototypical settings, both for artificial neural networks and for recurrent ones.