Extreme Value Statistics and Applications to the Environment
Jean-Noël Bacro & Gwladys Toulemonde (Montpellier, FRANCE)
Course Summary: Modeling the extremes of spatial processes, such as environmental processes, is fundamental for a proper understanding of the underlying phenomena and of the potential consequences of their occurrence. The course will present the different approaches used in the statistical modeling of spatial extremes, focusing on max-stable processes and on models for asymptotically independent processes. The concepts of dependence and asymptotic independence will be explained, and the statistical tools used to characterize them will be detailed. Possible extensions to spatio-temporal frameworks will also be presented.
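As an illustration of the kind of statistical tool the summary refers to, here is a minimal sketch (not taken from the course materials) of the empirical tail-dependence coefficient chi(u) between two sites: chi(u) tending to 0 as u approaches 1 suggests asymptotic independence, while a positive limit suggests asymptotic dependence. The data, thresholds, and variable names below are illustrative assumptions.

```python
# Minimal sketch: empirical chi(u) = P(F(X) > u | G(Y) > u) on rank-transformed margins.
import numpy as np

def empirical_chi(x, y, u):
    """Empirical tail-dependence coefficient chi(u) using pseudo-uniform margins."""
    n = len(x)
    # Rank transform each margin to approximately uniform values in (0, 1)
    fx = (np.argsort(np.argsort(x)) + 1) / (n + 1.0)
    fy = (np.argsort(np.argsort(y)) + 1) / (n + 1.0)
    joint = np.mean((fx > u) & (fy > u))   # P(F(X) > u, G(Y) > u)
    marg = np.mean(fy > u)                 # P(G(Y) > u), roughly 1 - u
    return joint / marg if marg > 0 else np.nan

# Illustrative example: bivariate Gaussian data, which is asymptotically independent,
# so chi(u) should decrease toward 0 as u increases.
rng = np.random.default_rng(0)
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.8], [0.8, 1.0]], size=10_000)
for u in (0.90, 0.95, 0.99):
    print(f"chi({u:.2f}) = {empirical_chi(z[:, 0], z[:, 1], u):.3f}")
```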
Piecewise Deterministic Markov Processes: Modeling and Simulation
Benoîte De Saporta (University of Montpellier, FRANCE)
Course Summary: When a phenomenon is too complex to be modeled by a deterministic dynamical system of reasonable dimension, systems that involve multiple regimes can be used. Each regime has a controlled complexity, and the system is allowed to transition from one regime to another randomly and at random times. These hybrid stochastic systems form the class of Piecewise Deterministic Markov Processes (PDMPs). They have many advantages, such as a flexible and easily interpretable parametric form, and have been the subject of many recent developments related to their simulation and optimization.
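To make the construction concrete, the following is a minimal simulation sketch (an illustration, not material from the course) of a two-regime PDMP: between jumps the state follows a simple deterministic flow, the waiting time until the next jump is exponentially distributed with a regime-dependent rate, and each jump switches the regime. The flow parameters and jump rates are assumptions chosen for illustration.

```python
# Minimal sketch: simulating a two-regime Piecewise Deterministic Markov Process.
import numpy as np

rng = np.random.default_rng(1)

growth = {0: 0.5, 1: -1.0}    # deterministic flow parameter in each regime (x' = a * x)
jump_rate = {0: 1.0, 1: 2.0}  # intensity of the jump clock in each regime

def simulate_pdmp(x0=1.0, regime=0, horizon=10.0):
    """Simulate until `horizon`; return (time, state, regime) at jump times and at the horizon."""
    t, x = 0.0, x0
    path = [(t, x, regime)]
    while True:
        # Waiting time until the next jump: exponential with the current regime's rate
        tau = rng.exponential(1.0 / jump_rate[regime])
        if t + tau >= horizon:
            # Deterministic flow up to the horizon, no further jump
            x *= np.exp(growth[regime] * (horizon - t))
            path.append((horizon, x, regime))
            return path
        # Deterministic flow between jumps: x(t + tau) = x(t) * exp(a * tau)
        x *= np.exp(growth[regime] * tau)
        t += tau
        # Jump: switch the regime (the state itself is kept continuous here)
        regime = 1 - regime
        path.append((t, x, regime))

for t, x, r in simulate_pdmp():
    print(f"t = {t:5.2f}  x = {x:7.3f}  regime = {r}")
```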
Sampling and Optimization
Adil Salim (Microsoft Research, USA)
Course Summary: Sampling from a target probability distribution whose density is known only up to a normalizing constant is a fundamental problem in statistics and machine learning. While the optimization literature for machine learning has expanded significantly over the last decade, with sharp convergence rates for some methods, the literature on sampling remained mainly asymptotic until very recently. Since then, the machine learning community has become increasingly interested in the non-asymptotic analysis of sampling algorithms and in designing new schemes to improve the sampling complexity.

Approximating a target probability distribution can be cast as an optimization problem in which the objective function measures the dissimilarity to the target distribution. In particular, the Kullback-Leibler divergence (or relative entropy) with respect to the target distribution is a suitable objective function when the normalizing constant is intractable, as is often the case in Bayesian inference. This optimization problem can be addressed using optimization techniques over a space of probability measures. Wasserstein gradient flows provide tools to solve it: they are continuous paths of distributions along which the objective functional decreases. Moreover, several sampling algorithms, such as Langevin Monte Carlo or Stein Variational Gradient Descent, can be seen as discretizations of a Wasserstein gradient flow.

In this tutorial, we will show how optimization techniques can be leveraged to design and analyze sampling algorithms. We will first review fundamental optimization concepts, such as Euclidean gradient flows (i.e., continuous-time gradient descent), before introducing their optimal transport analogues, such as Wasserstein gradient flows. We will then present an optimization perspective on standard and new sampling algorithms and explain how this perspective has led to new convergence results and to the design of new schemes.
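As a concrete instance of the last point, here is a minimal sketch of (unadjusted) Langevin Monte Carlo, i.e. the Euler-Maruyama discretization of the Langevin dynamics whose continuous-time limit is the Wasserstein gradient flow of the KL divergence. The Gaussian target, step size, and iteration counts below are illustrative assumptions, not material from the tutorial.

```python
# Minimal sketch: Langevin Monte Carlo on a 1D Gaussian target known up to a constant.
import numpy as np

rng = np.random.default_rng(2)

# Target: N(mu, sigma^2), accessed only through the gradient of its potential V = -log density
mu, sigma = 2.0, 1.5
def grad_potential(x):
    """Gradient of the potential V(x) = (x - mu)^2 / (2 sigma^2), up to an additive constant."""
    return (x - mu) / sigma**2

step = 0.05            # discretization step of the Langevin dynamics
n_particles = 5_000
n_iters = 2_000

x = rng.standard_normal(n_particles)   # particles drawn from an initial distribution
for _ in range(n_iters):
    # One Euler-Maruyama step of dX_t = -grad V(X_t) dt + sqrt(2) dW_t:
    # a gradient step on the potential plus Gaussian noise.
    x = x - step * grad_potential(x) + np.sqrt(2 * step) * rng.standard_normal(n_particles)

print(f"sample mean ~ {x.mean():.3f} (target {mu}), sample std ~ {x.std():.3f} (target {sigma})")
```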
Machine Learning with a practical Deep Learning hands-on
Ismaila Seck (Lengo, Dakar, SENEGAL) https://www.kaggle.com/learn
Course Summary: Machine learning research, with a practical deep learning hands-on component.
Practical sessions