15 December 2023 — Rooms R02-R05 — New Orleans Convention Center
Heavy Tails in ML
Structure, Stability, Dynamics
a NeurIPS 2023 Workshop
Heavy tails and chaotic behavior naturally appear in many ways in ML.
We aim to create an environment to study how they emerge and how they affect the performance of ML algorithms.
Description
Heavy-tailed distributions are likely to produce observations that are very large in magnitude and far from the mean; hence, they are often used to model phenomena that exhibit outliers. As a consequence, the machine learning and statistics communities often associate heavy-tailed behavior with negative consequences, such as outliers or numerical instability.
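The contrast is easy to see numerically. The snippet below (a minimal illustrative sketch, not part of the workshop materials) compares a Gaussian sample with a sample from a Pareto distribution with tail index α = 1.5, for which P(X > x) = x^(-1.5) and the variance is infinite: the heavy-tailed sample routinely contains observations orders of magnitude above its median.

```python
# Illustrative sketch: heavy-tailed samples contain extreme outliers,
# while Gaussian samples of the same size essentially never do.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

gaussian = np.abs(rng.standard_normal(n))
# Pareto with tail index alpha = 1.5: P(X > x) = x^{-1.5} for x >= 1,
# so the mean is finite but the variance is infinite.
pareto = 1.0 + rng.pareto(1.5, n)

for name, x in [("|gaussian|", gaussian), ("pareto(1.5)", pareto)]:
    print(f"{name:12s} median={np.median(x):6.2f}  max={x.max():12.1f}  "
          f"P(X > 10) ~ {np.mean(x > 10):.5f}")
```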
Despite this 'daunting' connotation, heavy tails are ubiquitous in virtually every domain: many natural systems have indeed been identified as heavy-tailed, and their heavy-tailed behavior has been shown to be the main feature determining their characteristics.
In the context of machine learning, recent studies have shown that heavy tails also emerge naturally during ML training in various ways and, contrary to their perceived image, can in fact be beneficial to the performance of an ML algorithm.
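One concrete instance of this emergence is the gradient noise of SGD, whose norm has been reported to exhibit power-law tails in several settings. The sketch below (our own toy example on least-squares regression, not code from any particular paper) records minibatch gradient-noise norms along an SGD run and estimates their tail index with the classical Hill estimator; smaller estimates indicate heavier tails.

```python
# Toy sketch: measure how heavy-tailed SGD's minibatch gradient noise is.
import numpy as np

rng = np.random.default_rng(1)
d, n, batch = 20, 5_000, 8
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + rng.standard_normal(n)

w = np.zeros(d)
noise_norms = []
for _ in range(3_000):
    idx = rng.integers(0, n, batch)
    g_batch = X[idx].T @ (X[idx] @ w - y[idx]) / batch  # minibatch gradient
    g_full = X.T @ (X @ w - y) / n                      # full-batch gradient
    noise_norms.append(np.linalg.norm(g_batch - g_full))
    w -= 0.01 * g_batch

# Hill estimator of the tail index alpha from the k largest observations:
# alpha_hat = k / sum_{i<k} log(x_(i) / x_(k)), order statistics descending.
x = np.sort(noise_norms)[::-1]
k = 200
alpha_hat = k / np.sum(np.log(x[:k] / x[k]))
print(f"estimated tail index alpha ~ {alpha_hat:.2f} (smaller = heavier tail)")
```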
The ultimate goal of this workshop is to foster research and the exchange of ideas at the intersection of applied probability, the theory of dynamical systems, optimization, and theoretical machine learning, in order to make progress on practical problems where heavy tails, stability, or topological properties of optimization algorithms play an important role, e.g., in understanding learning dynamics.
In our community, the emergence of heavy tails (and of the edge of stability) is often described as a 'phenomenon', which suggests that such behaviors are 'surprising' or even 'counterintuitive'. We aim to break this perception and to establish that such behaviors are in fact expected, and that theory and methodology should be repositioned accordingly.
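To make 'expected' concrete for the edge of stability, recall a textbook fact about gradient descent on a one-dimensional quadratic f(w) = (λ/2)w²: the update is w ← (1 − ηλ)w, so the iterates contract exactly when ηλ < 2, and ηλ = 2 is the stability threshold. The short sketch below (purely illustrative, not a workshop result) shows the three regimes.

```python
# Gradient descent on f(w) = (lam/2) * w^2: the update is
# w <- w - eta * lam * w = (1 - eta*lam) * w, stable iff eta*lam < 2.
lam = 4.0
for eta in (0.4, 0.5, 0.6):              # eta*lam = 1.6, 2.0, 2.4
    w = 1.0
    for _ in range(50):
        w -= eta * lam * w
    print(f"eta*lam = {eta*lam:.1f} -> |w| after 50 steps = {abs(w):.3g}")
# 1.6: converges; 2.0: oscillates at constant amplitude (the edge); 2.4: diverges.
```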
Topics
- Heavy tails in stochastic optimization
- Edge of stability
- Empirical scaling laws in large models
- Heavy-tailed auto-correlation
- Iterated function systems
- Heavy-tailed continuous dynamical systems
- Power-laws in ML
- Heavy tails and generalization
Accepted Papers
Schedule
| Time | Event | Title |
| --- | --- | --- |
| 09:00 - 09:01 | Opening remarks | -- |
| 09:00 - 10:00 | Keynote: Adam Wierman | An Introduction to Heavy Tails for ML Researchers: Conspiracies, Catastrophes, and the Principle of a Single Big Jump |
| 10:00 - 10:30 | Coffee break | -- |
| 10:30 - 11:10 | Invited talk: Liam Hodgkinson | Overparameterization and the Power Law Paradigm |
| 11:10 - 11:35 | Contributed talk: Vivien Cabannes | Associative Memories with Heavy-Tailed Data |
| 11:35 - 12:00 | Contributed talk: Dominique Perrault-Joncas | Meta-Analysis of Randomized Experiments |
| 12:00 - 14:00 | Lunch break | -- |
| 14:00 - 14:40 | Invited talk: Nisha Chandramoorthy | A Dynamical View of Learners, Samplers and Forecasters |
| 14:40 - 15:05 | Contributed talk: Jeremy Cohen | Adaptive Gradient Methods at the Edge of Stability |
| 15:05 - 15:30 | Coffee break | -- |
| 15:30 - 16:10 | Invited talk: Charles Martin | Heavy-Tailed Self-Regularization in DNNs |
Important Dates & Submission Info
We encourage the submission of research papers in the topic areas listed above. Papers will be presented as posters at the workshop, with a subset selected for contributed talks.
- Submission deadline: October 6, 2023 (Anywhere on Earth)
- Notification of acceptance: October 20, 2023 (Anywhere on Earth)
- Camera-ready papers: November 3, 2023 (Anywhere on Earth)
Submission format: NeurIPS style; no more than 6 pages of main content (excluding references and supplementary material)
Contact: heavytails_ml_2023@googlegroups.com
Invited Speakers

Adam Wierman
Caltech

Nisha Chandramoorthy
Georgia Tech

Charles H. Martin
Calculation Consulting

Liam Hodgkinson
University of Melbourne
Organizers

Mert Gürbüzbalaban
Rutgers

Stefanie Jegelka
MIT

Michael Mahoney
Berkeley

Umut Şimşekli
Inria Paris / ENS
Venue

New Orleans Ernest N. Morial Convention Center
Rooms R02-R05