15 December 2023, Rooms R02-R05, New Orleans Convention Center

Heavy Tails in ML

Structure, Stability, Dynamics

a NeurIPS 2023 Workshop

Heavy tails and chaotic behavior naturally appear in many ways in ML.

We aim to create an environment to study how they emerge and how they affect the performance of ML algorithms.

Description

Heavy-tailed distributions are likely to produce observations that are very large in magnitude and far from the mean; hence, they are often used to model phenomena that exhibit outliers. As a consequence, the machine learning and statistics communities often associate heavy-tailed behavior with negative consequences, such as outliers or numerical instability.
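As a quick illustration (a minimal sketch with assumed parameters, not part of the workshop materials), the outlier-producing character of heavy tails can be seen by comparing how much of a sample's total mass comes from its single largest observation:

```python
import numpy as np

# Minimal sketch (illustrative parameters assumed): compare a light-tailed
# exponential sample with a heavy-tailed Pareto sample of the same size.
rng = np.random.default_rng(0)
n = 100_000

light = rng.exponential(size=n)        # light-tailed
heavy = rng.pareto(1.2, size=n) + 1.0  # heavy-tailed; tail index 1.2 < 2 means infinite variance

for name, x in [("exponential", light), ("pareto(1.2)", heavy)]:
    # For heavy tails, the single largest observation carries a non-negligible
    # fraction of the whole sum ("single big jump"); for light tails it does not.
    print(f"{name:12s} max={x.max():12.1f}  max/sum={x.max() / x.sum():.4f}")
```

For the exponential sample the maximum contributes a vanishing fraction of the sum, while for the Pareto sample a single observation can dominate it, which is the behavior the keynote's "principle of a single big jump" refers to.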

Despite this 'daunting' connotation, heavy tails are ubiquitous in virtually every domain: many natural systems have indeed been identified as heavy-tailed, and their heavy-tailed behavior has been shown to be the main feature determining their characteristics.

In the context of machine learning, recent studies have shown that heavy tails also emerge naturally during training in various ways and, contrary to their perceived image, can in fact be beneficial to the performance of an ML algorithm.
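For instance (a toy sketch with an assumed step size and model, not taken from the workshop program), constant-step SGD on one-dimensional linear regression with Gaussian data already produces heavy-tailed iterates, because the update is a Kesten-type recursion with multiplicative noise:

```python
import numpy as np

# Toy sketch (assumed setup): constant-step SGD on 1-d linear regression,
# x <- x - eta * a * (a * x - b) with fresh a, b ~ N(0, 1) each step.
# The multiplicative factor (1 - eta * a^2) makes the stationary law heavy-tailed.
rng = np.random.default_rng(1)
eta, n_chains, n_steps = 1.0, 20_000, 2_000

x = np.zeros(n_chains)
for _ in range(n_steps):
    a = rng.standard_normal(n_chains)
    b = rng.standard_normal(n_chains)
    x = x - eta * a * (a * x - b)

# Crude Hill estimator on the top k order statistics: values below 2
# indicate a power-law tail with infinite variance.
s = np.sort(np.abs(x))
k = 500
hill = 1.0 / np.mean(np.log(s[-k:] / s[-k - 1]))
print(f"Hill tail-index estimate: {hill:.2f}")
```

Even though all inputs are Gaussian, the estimated tail index comes out well below 2 for this step size, so the stationary distribution of the iterates is heavy-tailed.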

The ultimate goal of this workshop is to foster research and the exchange of ideas at the intersection of applied probability, the theory of dynamical systems, optimization, and theoretical machine learning, in order to make progress on practical problems where heavy tails, stability, or topological properties of optimization algorithms play an important role, e.g., in understanding learning dynamics.

In our community, the emergence of heavy tails (and the edge of stability) is often perceived as a 'phenomenon', which essentially implies that such behaviors are 'surprising' or even 'counterintuitive'. We aim to break this perception and establish that they are in fact expected, and that theory and methodology should be repositioned accordingly.

Topics

Accepted Papers

The list of accepted papers can be accessed via this link.

Schedule

Time           Event
09:00 - 09:01  Opening remarks
09:00 - 10:00  Keynote (Adam Wierman): An Introduction to Heavy Tails for ML Researchers: Conspiracies, Catastrophes, and the Principle of a Single Big Jump
10:00 - 10:30  Coffee break
10:30 - 11:10  Invited talk (Liam Hodgkinson): Overparameterization and the Power Law Paradigm
11:10 - 11:35  Contributed talk (Vivien Cabannes): Associative Memories with Heavy-Tailed Data
11:35 - 12:00  Contributed talk (Dominique Perrault-Joncas): Meta-Analysis of Randomized Experiments
12:00 - 14:00  Lunch break
14:00 - 14:40  Invited talk (Nisha Chandramoorthy): A Dynamical View of Learners, Samplers and Forecasters
14:40 - 15:05  Contributed talk (Jeremy Cohen): Adaptive Gradient Methods at the Edge of Stability
15:05 - 15:30  Coffee break
15:30 - 16:10  Invited talk (Charles Martin): Heavy-Tailed Self-Regularization in DNNs
16:10 - 17:30  Poster session (see the accepted papers)

Important Dates & Submission Info

We encourage submissions of research papers in the topic areas above. Papers will be presented as posters at the workshop, with some selected for contributed talks.


Submission format: NeurIPS style, no more than 6 pages of main content (excluding references and supplementary material)

Contact: heavytails_ml_2023@googlegroups.com

Invited Speakers

Adam Wierman

Caltech

Nisha Chandramoorthy

Georgia Tech

Charles H. Martin

Calculation Consulting

Liam Hodgkinson

University of Melbourne

Organizers

Mert Gürbüzbalaban

Rutgers

Stefanie Jegelka

MIT

Michael Mahoney

Berkeley

Umut Şimşekli

Inria Paris / ENS

Venue

New Orleans Ernest N. Morial Convention Center


Rooms R02-R05