A public ELLIS reading group exploring the interplay between the mathematical foundations of deep learning and the practical challenge of making ML efficient — from optimization theory to hardware-aware training.
16. March 2026 @ 5pm CET / 11am EST / 8am PST [timezone converter]
Procedural Pretraining: Warming Up Language Models with Abstract Data
Liangze Jiang, EPFL and Idiap Research Institute, Switzerland
Zachary Shinnick, Australian Institute for Machine Learning (AIML), Adelaide University, Australia
Abstract: Pretraining directly on web-scale corpora is the de facto paradigm for building language models. We study an alternative setting where the model is initially exposed to abstract structured data, as a means to ease the subsequent acquisition of rich semantic knowledge, much like humans learn simple logic and mathematics before higher reasoning. As such abstract data, we specifically focus on procedural data generated by formal languages and other simple algorithms. We first diagnose the algorithmic skills that different forms of procedural data can improve, often significantly. For example, on context recall (NEEDLE-IN-A-HAYSTACK), accuracy jumps from 10% to 98% when pretraining on Dyck sequences (balanced brackets). Second, we study how these gains carry over to pretraining larger models (up to 1.3B parameters). We find that front-loading as little as 0.1% procedural data yields models that significantly outperform standard pretraining on natural language, code, and informal mathematics (the C4, CODEPARROT, and DEEPMIND-MATH datasets). Notably, procedural pretraining enables the models to reach the same loss value with only 55 / 67 / 86% of the original data. Third, we explore the mechanisms behind these gains and find that procedural pretraining instils non-trivial structure in both attention and MLP layers. The former is particularly important for structured domains (e.g. code), and the latter for language. Finally, we lay a path for combining multiple forms of procedural data. Our results indicate that procedural pretraining is a remarkably simple, lightweight means of improving performance and accelerating language model pretraining. This ultimately suggests the promise of disentangling knowledge acquisition from reasoning in LLMs.
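As a rough illustration of the kind of procedural data the abstract describes, here is a minimal Python sketch that samples Dyck sequences (balanced brackets) and front-loads a small fraction of them before a natural-language corpus. The bracket vocabulary, sampling scheme, and mixing helper are illustrative assumptions, not the authors' implementation.

```python
import random

# Minimal sketch (not the authors' code): Dyck sequences as procedural
# "warm-up" data. The bracket vocabulary, sequence length, and the 0.1%
# mixing ratio below are illustrative assumptions based on the abstract.
BRACKETS = [("(", ")"), ("[", "]"), ("{", "}")]

def sample_dyck(n_pairs: int) -> str:
    """Sample a balanced bracket sequence with n_pairs matched pairs."""
    seq, stack, opens_left = [], [], n_pairs
    while opens_left > 0 or stack:
        # Open a new bracket while pairs remain; otherwise close the most recent one.
        if opens_left > 0 and (not stack or random.random() < 0.5):
            left, right = random.choice(BRACKETS)
            seq.append(left)
            stack.append(right)
            opens_left -= 1
        else:
            seq.append(stack.pop())
    return "".join(seq)

def front_load(procedural: list[str], corpus: list[str], ratio: float = 0.001) -> list[str]:
    """Place a small procedural slice before the natural-language corpus."""
    n_proc = max(1, int(ratio * len(corpus)))
    return procedural[:n_proc] + corpus

if __name__ == "__main__":
    print(sample_dyck(8))  # e.g. "([]{()})[{}]"
```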
11. May 2026 @ 5pm CEST / 11am EDT / 8am PDT [timezone converter]
Finite-Time Lyapunov Exponents of Deep Neural Networks
Bernhard Mehlig, Department of Physics, University of Gothenburg, Sweden
Abstract: We compute how small input perturbations affect the output of deep neural networks, exploring an analogy between deep feed-forward networks and dynamical systems, where the growth or decay of local perturbations is characterized by finite-time Lyapunov exponents. We show that the maximal exponent forms geometrical structures in input space, akin to coherent structures in dynamical systems. Ridges of large positive exponents divide input space into different regions that the network associates with different classes. These ridges visualize the geometry that deep networks construct in input space, shedding light on the fundamental mechanisms underlying their learning capabilities.
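As a rough illustration of the quantity studied in this talk, here is a minimal Python (PyTorch) sketch that estimates the maximal finite-time Lyapunov exponent of a small feed-forward network from the largest singular value of its input-output Jacobian, normalized by depth. The architecture, activation, and normalization are assumptions for illustration, not the paper's setup.

```python
import torch

# Minimal sketch (illustrative, not the paper's code): the maximal
# finite-time Lyapunov exponent at an input point, computed from the
# largest singular value of the input-output Jacobian, divided by the
# network depth L. Architecture and activation are assumptions.
torch.manual_seed(0)
L, width = 5, 64
layers = []
for _ in range(L):
    layers += [torch.nn.Linear(width, width), torch.nn.Tanh()]
net = torch.nn.Sequential(*layers)

def max_ftle(x: torch.Tensor) -> float:
    """Largest finite-time Lyapunov exponent of net at input point x."""
    jac = torch.autograd.functional.jacobian(net, x)  # (width, width) Jacobian
    sigma_max = torch.linalg.svdvals(jac)[0]          # largest singular value
    return (torch.log(sigma_max) / L).item()

x = torch.randn(width)
print(max_ftle(x))  # positive values indicate locally expanding directions
```

Mapping the exponent over a grid of inputs would reveal the ridge structures described in the abstract.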
9. March 2026 @ 5pm CET — ▶️ YouTube
How Does Sharpness-Aware Minimization Minimize Sharpness?
Kaiyue Wen, Stanford University, USA
arXiv: https://arxiv.org/abs/2211.05729
2. March 2026 @ 5pm CET — ▶️ YouTube
When Flatness Does (Not) Guarantee Adversarial Robustness
Nils Philipp Walter, CISPA Helmholtz Center for Information Security, Germany
arXiv: https://arxiv.org/pdf/2510.14231
9. February 2026 @ 5pm CET — ▶️ YouTube
Saddle-to-Saddle Dynamics Explains a Simplicity Bias Across Neural Network Architectures
Yedi Zhang, Gatsby Computational Neuroscience Unit, University College London, UK
arXiv: https://arxiv.org/pdf/2512.20607
The paper on Muon that Yedi mentioned in the talk is now on arXiv: https://arxiv.org/abs/2603.00742
19. January 2026 @ 5pm CET — ▶️ YouTube
Fast Video Generation (multiple papers)
Rahim Entezari, Wayve.ai
12. January 2026 @ 5pm CET — ▶️ YouTube
Flatness is Necessary, Neural Collapse is Not: Rethinking Generalization via Grokking
Ting Han, Lamarr Institute, TU Dortmund, and Institute for AI in Medicine, University Hospital Essen (UK Essen), Germany
OpenReview: https://openreview.net/pdf?id=lbtOctHDQ3
Contact us with questions or suggestions at efficientml@gmail.com.
Self-nominations to present your published work in the reading group are welcome.