Titles and Abstracts

(in alphabetical order of the speakers)

Marco Bertenghi: The elephant random walk and step-reinforced random walks

The elephant random walk (ERW) is a discrete-time nearest neighbour random walk on the integer lattice with infinite memory, in allusion to the traditional saying that an elephant never forgets. The ERW was introduced in the early 2000s by two physicists in order to investigate how long-range memory affects the behaviour of a random walker. In this talk, I will present how the ERW can be modelled via a Pólya urn and how said model can be used to obtain results on the asymptotic behaviour of the ERW and its multidimensional generalisation (MERW). Further, I will also introduce another generalisation of the ERW, the so-called step-reinforced random walk.
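To fix ideas, here is a minimal simulation sketch of the one-dimensional ERW with memory parameter p: at each step the walker recalls a uniformly chosen past step and repeats it with probability p, otherwise takes the opposite step. The function name and the convention for the very first step are illustrative choices, not part of the talk.

    import random

    def elephant_random_walk(n_steps, p, rng=random.Random(0)):
        """One-dimensional elephant random walk (illustrative sketch).

        At each step the walker recalls a uniformly chosen past step and
        repeats it with probability p, otherwise takes the opposite step;
        the first step is a symmetric coin flip (an illustrative convention).
        """
        steps = [rng.choice([-1, 1])]
        for _ in range(n_steps - 1):
            remembered = rng.choice(steps)        # uniform draw from the memory
            steps.append(remembered if rng.random() < p else -remembered)
        positions, s = [0], 0
        for step in steps:
            s += step
            positions.append(s)
        return positions

    # The walk is diffusive for p < 3/4 and superdiffusive for p > 3/4.
    print(elephant_random_walk(1000, p=0.7)[-1])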

Nicolas Champagnat: Convergence of conditional distributions and Fleming-Viot particle systems to the minimal quasi-stationary distribution

We give general conditions ensuring that the Fleming-Viot process selects the minimal quasi-stationary distribution (QSD) for Markov processes with absorption, in cases where there might not be uniqueness of QSDs. We start by providing general criteria for the convergence of conditional distributions given non-absorption based on small set and Lyapunov conditions, that apply to cases where QSDs are not unique, typically processes in non-compact state spaces that do not come back fast from infinity. We then prove that, assuming soft killing and a stronger Lyapunov condition, in the limit of infinitely many particles, the stationary distribution of the Fleming-Viot system converges to the unique QSD that attracts all Dirac masses---the so-called minimal QSD. We apply this result to multi-dimensional birth and death processes, continuous-time Galton-Watson processes and diffusion processes with soft killing.

This is joint work with Denis Villemonais.

Dennis Chemnitz: Conditioned Lyapunov Spectrum

Conditioned Lyapunov exponents were introduced by Engel et al. in an effort to describe the stability of a stochastic system as a local property. However, so far the existence of these conditioned Lyapunov exponents was only known for the dominant exponent. In this talk I will explain how a new framework, which connects random dynamical systems and conditioned Markov processes, can be used to establish the existence of the entire conditioned Lyapunov spectrum. This is joint work with Matheus M. Castro, Hugo Chu, Maximilian Engel, Jeroen S. W. Lamb and Martin Rasmussen.

Chris Dean: Pólya urns with growing initial compositions

A Pólya urn is a Markov process describing the contents of an urn that contains balls of d colours. At every time step, we draw a ball uniformly at random from the urn, note its colour, then put it back in the urn along with a set of new balls that depends on the colour drawn. The number of balls of colour j added when colour i is drawn is given by the (i,j)th entry of a predetermined replacement matrix R. The asymptotic behaviour of the urn for most replacement matrices can be inferred from the following two canonical cases. When R is the identity, the proportion of each colour in the urn tends to a Dirichlet-distributed random variable with parameter given by the urn's initial composition. When R is irreducible, this limit is a deterministic vector that depends only on R. Fluctuations around these limits are also known.
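The dynamics just described can be sketched in a few lines; the function and parameter names below (polya_urn, R, initial, n_draws) are illustrative, and the two replacement matrices at the end are toy examples of the identity and irreducible cases.

    import random

    def polya_urn(R, initial, n_draws, rng=random.Random(0)):
        """d-colour Polya urn with replacement matrix R (illustrative sketch).

        R[i][j] balls of colour j are added whenever colour i is drawn;
        `initial` is the starting composition.  Returns the final proportions.
        """
        counts = list(initial)
        for _ in range(n_draws):
            i = rng.choices(range(len(counts)), weights=counts)[0]   # uniform ball draw
            for j, add in enumerate(R[i]):
                counts[j] += add                                     # replacement step
        total = sum(counts)
        return [c / total for c in counts]

    # Identity replacement: proportions converge to a Dirichlet-distributed limit.
    print(polya_urn([[1, 0], [0, 1]], initial=[1, 1], n_draws=10000))
    # Irreducible replacement: proportions converge to a deterministic vector.
    print(polya_urn([[1, 2], [3, 1]], initial=[1, 1], n_draws=10000))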

Recently, Borovkov showed results on the asymptotic behaviour in the identity case, when the initial number of balls grows together with the number of time steps. In this talk, we show analogous results for the irreducible case. This will include the asymptotic behaviour of the proportion of each colour in the urn and the fluctuations around this limit.

Maximilian Engel: Computer-assisted proof of shear-induced chaos via quasi-stationary and quasi-ergodic measures

We confirm a long-standing conjecture concerning shear-induced chaos in stochastically perturbed systems exhibiting a Hopf bifurcation. The method of showing the main chaotic property, a positive Lyapunov exponent, is a computer-assisted proof. Using the recently developed theory of conditioned Lyapunov exponents on bounded domains and the modified Furstenberg-Khasminskii formula, the problem boils down to the rigorous computation of eigenfunctions of the Kolmogorov operators describing quasi-stationary and quasi-ergodic distributions of the underlying stochastic process. The proof technique may be used more generally for rigorously computing quasi-stationary distributions.

Andrea Ghiglietti: Asymptotics of Reinforced Stochastic Processes with a network-based interaction

We investigate the asymptotic dynamics of systems of interacting reinforced stochastic processes (RSPs) X^j = (X_{n,j})_n, in which the interaction is modelled by a finite weighted directed graph. These processes, located at the vertices of the graph, can be interpreted as the sequences of “actions” adopted by the agents of the network. To highlight the importance of recent experience in reinforcement learning, we also study the dynamics of the empirical means N_{n,j} = \sum_{k=1}^n X_{k,j}/n, and especially of the weighted empirical means \tilde{N}_{n,j} = \sum_{k=1}^n k X_{k,j} of such RSPs. For graphs with irreducible adjacency matrices, the entire class of interacting RSPs, along with their means, is proved to exhibit the synchronization phenomenon, i.e. to converge almost surely towards the same common limit random variable. Although the distribution of this limit variable is unknown, we prove that it is non-atomic in the interior of the domain and we characterize the probability that it lies in the interior or touches the barriers. We also provide central limit theorems in the sense of stable convergence that establish the convergence rates and the asymptotic distributions for both the convergence to the common limit and the synchronization. This second-order asymptotic behaviour of the system depends strongly on the topology of the network of interactions. Indeed, these results explicitly show how the convergence rates and the asymptotic variances are determined by the strength of the reinforcement mechanism and the eigen-structure of the weighted adjacency matrix. These theoretical results allow the construction of confidence intervals for the common limit random variable and of critical regions for inference on the topology of the interaction network.
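As a concrete illustration, the following is a minimal simulation sketch of one common formulation of interacting RSPs on a weighted network: each agent j carries an inclination Z_j in [0,1], acts according to a weighted average of the inclinations of its neighbours, and reinforces its own inclination with a step of order 1/n. The specific update rule, the row-stochastic weight matrix W and the names (interacting_rsp, c) are illustrative assumptions, not necessarily the exact model of the talk.

    import random

    def interacting_rsp(W, Z0, n_steps, c=1.0, rng=random.Random(0)):
        """Sketch of interacting reinforced stochastic processes on a network.

        W is a row-stochastic weight matrix (W[j][h] = influence of agent h on
        agent j); Z0 is the vector of initial inclinations in [0, 1].
        At step n, agent j acts X_j ~ Bernoulli(sum_h W[j][h] * Z[h]) and then
        updates Z[j] <- (1 - r_n) Z[j] + r_n X_j with r_n of order c/n.
        """
        Z = list(Z0)
        d = len(Z)
        for n in range(1, n_steps + 1):
            r = c / (n + 1)
            probs = [sum(W[j][h] * Z[h] for h in range(d)) for j in range(d)]
            X = [1 if rng.random() < p else 0 for p in probs]
            Z = [(1 - r) * Z[j] + r * X[j] for j in range(d)]
        return Z

    # With an irreducible W, the inclinations synchronise towards a common random limit.
    W = [[0.5, 0.5], [0.3, 0.7]]
    print(interacting_rsp(W, [0.2, 0.9], n_steps=100000))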

Emma Horton: A binary branching model with Moran-type interactions

Branching processes naturally arise as pertinent models in a variety of applications such as population size dynamics, neutron transport and cell proliferation kinetics. A key result for understanding the behaviour of such systems is the Perron-Frobenius decomposition, which allows one to characterise the large-time average behaviour of the branching process via its leading eigenvalue and corresponding left and right eigenfunctions. However, obtaining estimates of these quantities can be challenging, for example when the branching process is spatially dependent with inhomogeneous rates. In this talk, I will introduce a new interacting particle model that combines the natural branching behaviour of the underlying process with a selection and resampling mechanism, which allows one to maintain some control over the system and more efficiently estimate the eigenelements. I will then present the main result, which provides an explicit relation between the particle system and the branching process via a many-to-one formula and also quantifies the L^2 distance between the occupation measures of the two systems. Finally, I will discuss some examples in order to illustrate the scope and possible extensions of the model, and to provide some comparisons with the Fleming-Viot interacting particle system. This is based on ongoing work with Alex Cox (University of Bath) and Denis Villemonais (Université de Lorraine).
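As a rough illustration of how such a fixed-size particle system can be simulated, the sketch below implements one plausible discrete-time caricature: particles move as Brownian motions, a branching particle places its child on the slot of a uniformly chosen particle, and a dying particle is resampled from the rest of the population. The motion, the rates and the names (moran_branching, branch_rate, death_rate) are illustrative assumptions, not the exact model of the talk.

    import math
    import random

    def moran_branching(n_particles, n_steps, dt, branch_rate, death_rate,
                        rng=random.Random(0)):
        """Toy caricature of binary branching with Moran-type interactions,
        keeping the population size fixed at n_particles."""
        X = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
        for _ in range(n_steps):
            for i in range(n_particles):
                X[i] += math.sqrt(dt) * rng.gauss(0.0, 1.0)            # Brownian motion
                u = rng.random()
                if u < branch_rate(X[i]) * dt:                         # binary branching
                    j = rng.randrange(n_particles)
                    X[j] = X[i]                                        # child replaces particle j
                elif u < (branch_rate(X[i]) + death_rate(X[i])) * dt:  # death
                    j = rng.randrange(n_particles)
                    X[i] = X[j]                                        # resample from the population
        return X

    sample = moran_branching(200, 2000, 0.01,
                             branch_rate=lambda x: 1.0 / (1.0 + x * x),
                             death_rate=lambda x: 0.2 * x * x)
    print(sum(sample) / len(sample))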

Mathieu Jonckheere: Persistence phenomena for large biological neural networks

We study a biological neural network model driven by inhomogeneous Poisson processes accounting for the intrinsic randomness of biological mechanisms. We focus here on local interactions: upon firing, a given neuron increases the potential of a fixed number of randomly chosen neurons. We show a phase transition in terms of the stationary distribution of the limiting network. Whereas the activity of a finite network always vanishes in finite time, the infinite network might converge to either a trivial stationary measure or a nontrivial one. This allows us to model the biological phenomenon of persistence: we prove that the network may retain neural activity for large times, depending on certain global parameters describing the intensity of interconnection. We conjecture a connection with the quasi-stationary distribution of the finite network.
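For concreteness, the following Gillespie-type sketch simulates a toy version of such a locally interacting network: each neuron fires at a rate given by its potential, and upon firing it resets and increases the potential of K uniformly chosen neurons. The firing-rate function, the reset rule, the initial condition and the name neural_network_sim are illustrative assumptions, not those of the talk.

    import random

    def neural_network_sim(n_neurons, K, t_max, rate=lambda v: float(v),
                           rng=random.Random(0)):
        """Toy simulation of a locally interacting spiking network.

        Neuron i fires at rate rate(V[i]); upon firing its potential resets to 0
        and the potentials of K uniformly chosen neurons are increased by 1.
        """
        V = [1] * n_neurons                  # initial potentials
        t = 0.0
        while t < t_max:
            rates = [rate(v) for v in V]
            total = sum(rates)
            if total == 0.0:                 # finite-network activity has died out
                return t, V
            t += rng.expovariate(total)      # time of the next firing event
            i = rng.choices(range(n_neurons), weights=rates)[0]
            V[i] = 0
            for j in rng.sample(range(n_neurons), K):
                V[j] += 1
        return t, V

    print(neural_network_sim(n_neurons=50, K=2, t_max=10.0))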

This is joint work with Maximiliano Altamirano, Roberto Cortez, and Lasse Leskela.

Oliver Kelsey-Tough: Criterion for L^{\infty} convergence to a quasi-stationary distribution

We introduce a criterion for the density, with respect to the quasi-stationary distribution, of the distribution of a killed Markov process conditioned on survival to converge in the L^{\infty} norm as time goes to infinity.
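Schematically, with \tau the absorption time, \pi the quasi-stationary distribution and \mu_t^x the law of the process at time t started from x and conditioned on \{t < \tau\}, the convergence in question reads
\[
\left\| \frac{d\mu_t^x}{d\pi} - 1 \right\|_{L^{\infty}} \xrightarrow[t \to \infty]{} 0 ,
\]
where this notation is ours and the precise assumptions are those stated in the talk.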

Andreas Kyprianou: Yaglom limits for general non-local Branching Markov Processes

The Yaglom limit for critical Galton-Watson processes is a well-known result. In this talk we show that Yaglom limits are a universal property of general Branching Markov Processes, even with non-local branching mechanisms. We discuss in particular the setting of neutron transport.
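For reference, the classical Yaglom limit states that for a critical Galton-Watson process (Z_n) with offspring variance \sigma^2 \in (0,\infty),
\[
\lim_{n \to \infty} \mathbb{P}\!\left( \frac{Z_n}{n} > x \,\middle|\, Z_n > 0 \right) = e^{-2x/\sigma^2}, \qquad x \ge 0,
\]
i.e. the population size at time n, conditioned on survival and rescaled by n, converges in distribution to an exponential law.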

This talk is based on several joint papers with Simon Harris, Emma Horton, Isaac Gonzalez, Denis Villemonais and Minmin Wang.

Tony Lelièvre: From Langevin dynamics to kinetic Monte Carlo: the quasi-stationary distribution approach

We will discuss models used in classical molecular dynamics, and some mathematical questions raised by their simulations. In particular, we will present recent results on the connection between a metastable Markov process with values in a continuous state space (satisfying e.g. the Langevin or overdamped Langevin equation) and a jump Markov process with values in a discrete state space. This is useful to analyze and justify numerical methods which use the jump Markov process underlying a metastable dynamics as a support to efficiently sample the state-to-state dynamics (accelerated dynamics techniques à la A.F. Voter). It also provides a mathematical framework to justify the use of transition state theory and the Eyring-Kramers formula to build kinetic Monte Carlo or Markov state models.
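For context, the overdamped Langevin dynamics referred to above is the SDE
\[
dX_t = -\nabla V(X_t)\,dt + \sqrt{2\beta^{-1}}\,dW_t ,
\]
and, in one standard form (up to prefactor conventions), the Eyring-Kramers formula approximates the transition rate out of a local minimum x of V through a saddle point z by
\[
k \;\approx\; \frac{|\lambda^-(z)|}{2\pi}\, \sqrt{\frac{\det \nabla^2 V(x)}{\left| \det \nabla^2 V(z) \right|}}\; e^{-\beta\,(V(z) - V(x))},
\]
where \lambda^-(z) is the unique negative eigenvalue of the Hessian \nabla^2 V(z) and \beta the inverse temperature; the precise statements and regimes are those discussed in the talk.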

Pierre Monmarché: A coupling proof for the convergence of a particle approximation for QSDs

We consider a system of diffusion particles killed at some rate and resurrected according to the empirical measure of the system. As the simulation time and the number of particles go to infinity and the step-size of the numerical integration of the dynamics vanishes, the empirical measure of the system converges to the quasi-stationary distribution of the corresponding killed process. Using a probabilistic coupling argument, we obtain convergence estimates in each of these three parameters of interest, each independent of the others (in particular, the long-time convergence is independent of the number of particles). This is joint work with Lucas Journel.
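As a minimal illustration of such a particle approximation (not the precise scheme analysed in the talk), the sketch below runs an Euler-Maruyama discretization of N diffusion particles with soft killing; a killed particle is resurrected at the position of a uniformly chosen other particle. The drift, the killing rate and all parameter values are illustrative choices.

    import math
    import random

    def fleming_viot(n_particles, n_steps, dt, drift, kill_rate, rng=random.Random(0)):
        """Euler-Maruyama sketch of a Fleming-Viot particle system.

        Each particle follows dX = drift(X) dt + dW and is softly killed at
        rate kill_rate(X); upon killing it jumps to the position of a
        uniformly chosen other particle.  The empirical measure approximates
        the QSD of the killed diffusion.
        """
        X = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
        for _ in range(n_steps):
            for i in range(n_particles):
                X[i] += drift(X[i]) * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0)
                if rng.random() < kill_rate(X[i]) * dt:        # killing event
                    j = rng.randrange(n_particles - 1)
                    X[i] = X[j if j < i else j + 1]            # resurrect on another particle
        return X

    # Ornstein-Uhlenbeck drift with a quadratic killing rate (illustrative choices).
    sample = fleming_viot(200, 5000, 0.01, drift=lambda x: -x,
                          kill_rate=lambda x: 0.5 * x * x)
    print(sum(sample) / len(sample))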

William Oçafrain: Quasi-stationarity with moving boundaries

Some biological models involve diffusion processes absorbed by regions that move over time. This talk aims to present some results related to quasi-stationarity for general Markov processes absorbed by deterministically moving boundaries. In particular, under general criteria, a mixing property holds for the marginal laws conditioned on non-absorption. We will also investigate conditions allowing the existence of some asymptotic notions such as the Q-process or the quasi-ergodic distribution. Some interesting examples will be dealt with.

The results which will be presented in this talk come from an ongoing joint work with Nicolas Champagnat and Denis Villemonais.

Guillermo Olicón Méndez: Quasi-stationary distributions in random maps with bounded noise: almost invariant sets and repellers

In this talk we consider one-dimensional dynamical systems with additive bounded noise, which depend on parameters. We assume that the system exhibits a topological bifurcation of a minimal invariant set induced by a saddle-node bifurcation. This set then loses its invariance property, yet the orbits of the system remain in it for extremely long periods.

We prove that near the bifurcation point, there exists a unique quasi-stationary distribution. Furthermore, we give universal upper and lower bounds for the asymptotic behaviour of the survival rate, which depend only on the geometrical properties of the random map near the bifurcation point.
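To illustrate the phenomenon, the following sketch estimates the escape time from a window for the illustrative saddle-node family f_a(x) = x + x^2 + a with uniform noise on [-eps, eps]. The family, the window and all parameter values are assumptions made for this example, not the maps studied in the talk.

    import random

    def escape_time(a, eps, x0=0.0, window=(-1.0, 1.0), max_steps=10**6,
                    rng=random.Random(0)):
        """Escape time of a random map with additive bounded noise.

        Iterates x -> x + x**2 + a + noise until the orbit leaves `window`;
        just past the saddle-node bifurcation (a slightly positive) the set is
        no longer invariant, yet orbits linger for a long time.
        """
        x, lo, hi = x0, *window
        for n in range(max_steps):
            x = x + x * x + a + rng.uniform(-eps, eps)
            if not lo <= x <= hi:
                return n + 1
        return max_steps

    # Mean escape time over a few runs, just past the bifurcation point.
    times = [escape_time(a=0.01, eps=0.05, rng=random.Random(s)) for s in range(20)]
    print(sum(times) / len(times))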

On a related topic, we address the problem of the existence of quasi-stationary distributions supported on repelling sets, which contain points that escape the set almost surely in one step. We show that, despite this issue, we can guarantee the existence and uniqueness of the quasi-stationary distribution supported on such repelling sets.

This is joint work with Martin Rasmussen, Jeroen S. W. Lamb, and Matheus M. Castro (Imperial College London).

Guilherme Reis: The ant random walk

We propose a new model of random walk with reinforcement. Our goal is to observe the "ant mill phenomenon", in which a group of army ants leave a strong pheromone track and by accident begin to follow one another, forming a continuously rotating circle. The ants are not able to go back home and eventually die of exhaustion. A recording of this natural phenomenon can be seen in the video "Why army ants get trapped in 'death circles'": https://www.youtube.com/watch?v=LEKwQxO4EZU. In this talk we introduce the Ant Random Walk as a variation of the standard Edge Reinforced Random Walk. We prove that this random walk exhibits the ant mill phenomenon: eventually the walkers form groups and get trapped in disjoint circles, spinning there forever.
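For readers unfamiliar with edge reinforcement, the sketch below simulates the standard linearly edge-reinforced random walk that the ant random walk modifies; the exact reinforcement rule of the ant random walk itself is not reproduced here, and the graph, initial weights and names are illustrative choices.

    import random

    def edge_reinforced_walk(adj, start, n_steps, rng=random.Random(0)):
        """Standard linearly edge-reinforced random walk (baseline model).

        adj is an adjacency list; every edge starts with weight 1 and its
        weight increases by 1 each time the walker traverses it, so frequently
        used edges ("pheromone tracks") become increasingly attractive.
        """
        weights = {}                                          # undirected edge -> weight
        w = lambda u, v: weights.get((min(u, v), max(u, v)), 1.0)
        path, x = [start], start
        for _ in range(n_steps):
            nbrs = adj[x]
            y = rng.choices(nbrs, weights=[w(x, v) for v in nbrs])[0]
            e = (min(x, y), max(x, y))
            weights[e] = weights.get(e, 1.0) + 1.0            # reinforce the crossed edge
            path.append(y)
            x = y
        return path

    # Small graph with two cycles; the ant random walk modifies the rule above
    # so that walkers eventually get trapped on cycles.
    adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
    print(edge_reinforced_walk(adj, start=0, n_steps=30))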

Based on joint works with Dirk Erhard and Tertuliano Franco.

Bruno Schapira: A probabilistic model for the formation of paths in ant colonies

We will present two classes of probabilistic models, based on reinforcement learning strategies, for modeling the behavior of ants in their attempt to find the shortest way from the nest to the source of food. The general principle is that ants are modeled by a sequence of random walks evolving on a graph, all starting from a fixed vertex called the Nest, until they reach another vertex called the Food, each reinforcing the weights of some of the edges on its range.

Our first class of models consists in reinforcing only a simple path, on the way back from the Food to the Nest. It proves to be efficient from the biological perspective, in the sense that, asymptotically, only geodesic paths from the Nest to the Food survive in this process.

The second class of models consists in reinforcing all the edges on the range of each walk. This second class proves to be inefficient from a biological perspective (as predicted by biologists), but still possesses remarkable properties from the probabilistic point of view. Indeed, it is one of the very few models with a linear reinforcement mechanism for which deterministic limits show up asymptotically. The proofs of our results use techniques from stochastic algorithms and urn models. A schematic simulation of this second class is sketched below.
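The following sketch is a schematic version of the second class of models: each ant performs a weighted random walk from the Nest to the Food, and then every edge it has crossed is reinforced. The tie-breaking, initial weights, reinforcement amount and the example graph are illustrative assumptions.

    import random

    def ant_path_model(adj, nest, food, n_ants, rng=random.Random(0)):
        """Schematic version of the second class of models: each ant walks
        from nest to food, then every edge on its range is reinforced by 1."""
        weights = {}
        w = lambda u, v: weights.get((min(u, v), max(u, v)), 1.0)
        for _ in range(n_ants):
            x, crossed = nest, set()
            while x != food:
                nbrs = adj[x]
                y = rng.choices(nbrs, weights=[w(x, v) for v in nbrs])[0]
                crossed.add((min(x, y), max(x, y)))
                x = y
            for e in crossed:                               # reinforce the range of the walk
                weights[e] = weights.get(e, 1.0) + 1.0
        return weights

    # Two parallel routes from Nest (0) to Food (3): a short one and a long one.
    adj = {0: [1, 2], 1: [0, 3], 2: [0, 4], 4: [2, 3], 3: [1, 4]}
    print(ant_path_model(adj, nest=0, food=3, n_ants=500))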

Based on joint works with Daniel Kious and Cécile Mailler.

David Steinsaltz: Quasistationarity in models of biological populations: a review, and some open problems

The concept of quasistationary distributions, and convergence to quasistationarity, has from its initial conception by Yaglom been closely related to models of biological populations. In recent decades the theory has been spurred on by the demands of a range of novel applications, seeking to model population senescence and mortality, evolution of age-structured populations, ecological survival and extinction, and the progression of new pathogen variants in an epidemic. These have often required the extension of existing results to novel spaces — more continuous spaces, higher-dimensional spaces, more complexly structured spaces, or spaces lacking compactness constraints. This talk will review some of the existing models, including some connections between superficially very different domains, and explore some of the natural questions that arise from these models for which we currently have no good technical solutions.

Denis Villemonais: Fluctuations of balanced urns with infinitely many colours

Measure-valued Pólya processes are a natural extension of Pólya urns to settings with infinitely many colours. After introducing this object, I will present recent results describing the fluctuations of these urns around their long-time equilibrium. In particular, the convergence speed and the limit law depend strongly on the spectral theory of the mean replacement kernel. In passing, I will show how the strong Feller property and Lyapunov-type estimates can be used to prove the quasi-compactness of linear operators, with applications to quasi-stationary theory. The talk is based on a recent collaboration with Cécile Mailler and Svante Janson.

Andi Wang: Quasi-stationary Monte Carlo methods via stochastic approximation

Quasi-stationary distributions, although originating in probability theory to study population dynamics, have recently formed the basis of a new class of Monte Carlo methods known as quasi-stationary Monte Carlo (QSMC) methods, which are designed to perform exact Bayesian inference for large data sets. The general idea is to simulate a killed diffusion whose quasi-stationary distribution coincides with the posterior distribution of interest, with the aim of sampling approximately from the quasi-stationary distribution. In my talk I will introduce this class of QSMC methods, then focus on a stochastic approximation approach to simulating the killed system, and discuss recent developments.
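As a very rough caricature of this idea, the sketch below runs a single diffusion trajectory with soft killing and, upon killing, regenerates the process from a state drawn uniformly from its own past occupation; the occupation measure then serves as an approximation of the QSD. The uniform-regeneration rule, the drift, the killing rate and all names are illustrative simplifications; actual QSMC and stochastic-approximation schemes weight the past and handle the killing events far more carefully.

    import math
    import random

    def qsmc_sketch(drift, kill_rate, x0, n_steps, dt, rng=random.Random(0)):
        """Stochastic-approximation-flavoured sketch of a QSMC run.

        Euler-Maruyama steps with soft killing at rate kill_rate(x); upon
        killing, the process restarts from a uniformly chosen past state.
        """
        x, history = x0, [x0]
        for _ in range(n_steps):
            x += drift(x) * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0)
            if rng.random() < kill_rate(x) * dt:
                x = rng.choice(history)          # regenerate from the past occupation
            history.append(x)
        return history

    occ = qsmc_sketch(drift=lambda x: -x, kill_rate=lambda x: 0.5 * x * x,
                      x0=0.0, n_steps=20000, dt=0.01)
    print(sum(occ) / len(occ))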

Guo-Jhen Wu: Quasi-stationary distributions and ergodic control problems

We introduce two ergodic control problems that can be used to analyze the quasi-stationary distributions (QSDs) associated with a diffusion process. The two problems are in some sense dual, with one defined in terms of the generator associated with the diffusion process and the other in terms of its adjoint. The first ergodic control problem can be used to characterize the Q-process associated with the QSD, while the cost potential of the second can be used to characterize the QSD itself. We briefly mention how the control problems can be used to construct numerical approximations to the QSD.

This is a joint work with Amarjit Budhiraja (University of North Carolina at Chapel Hill), Paul Dupuis (Brown University), and Pierre Nyquist (KTH).