Artificial neural networks (ANNs) have emerged as a powerful tool in modern machine learning, yet their mathematical foundations remain only partially understood. A key challenge is the inherently stochastic nature of ANN training: optimization occurs in high-dimensional parameter spaces with complex loss landscapes, influenced by stochastic initialization and noisy gradient updates. Understanding these dynamics requires probabilistic methods and asymptotic frameworks.
This workshop will explore recent advances in stochastic training dynamics, emphasizing probabilistic techniques and limit theorems. By bringing together researchers from probability, optimization, and deep learning theory, we aim to foster discussions on emerging results and new directions in understanding neural network training from a stochastic perspective.
Each participant will give a 30-minute presentation. These talks are intended to introduce recent results, key concepts, ongoing work, heuristics, empirical findings, and new ideas.
We’ll have open slots for additional talks, to be scheduled during the week. These talks will build on themes and questions that emerge from the initial discussions. They may focus on specific proof techniques, highlight additional results, or explore open problems and new directions.
Problem sessions and open discussions will be used to follow up on the talks, work on open problems, and develop potential research projects.
Steffen Dereich (Münster)
Aymeric Dieuleveut (Palaiseau)
Sebastian Kassing (Berlin)
Sophie Langer (Bochum)