Honolulu, Hawaii
The study of the learning dynamics of machine learning (ML) algorithms, especially in high-dimensional settings, is a rapidly growing area of ML. It provides theoretical guarantees and tools to explain many phenomena observed in practical applications. Tools from stochastic differential equations (SDEs), high-dimensional probability, and random matrix theory (RMT) are regularly employed to model high-dimensional data and the trajectories of optimization algorithms. The HiLD workshop seeks to spur research and collaboration around:
Developing analyzable models and dynamics to explain observed deep neural network phenomena;
Creating mathematical frameworks for scaling limits of neural network dynamics as width and depth grow;
Provably explaining the role of the optimization algorithm, hyper-parameter choices, and neural network architectural choices in training/test dynamics;
Understanding high dimensionality, where intuitions from low-dimensional geometry often lead to inaccurate (and misleading) conclusions about machine learning models trained on large real-world datasets.
The Workshop on High-dimensional Learning Dynamics (HiLD) aims to bring together experts from classical RMT, high-dimensional statistics/probability, and SDEs to share their perspectives on the dynamics of ML algorithms, alongside crossover experts in ML. It seeks to create synergies between these communities, which often do not interact. Through a series of talks, the workshop will tackle questions on the dynamics of algorithms at the interface of RMT, high-dimensional statistics, SDEs, and ML.
Topics include, but are not limited to:
Training and test dynamics of learning algorithms
Mean field approximation regimes
Neural tangent kernel and beyond
Neural ODEs
SDE and other continuous time limits of SGD
Loss landscape of neural networks (e.g., spectrum of the Hessian, double descent)
Average-case analysis of learning algorithms
Sketching, sampling, and general randomized methods in ML
Learning and information propagation on large (random) graphs
Scaling laws and their effect on generalization performance
Deadline for submission of papers: May 29, 2023, Anywhere on Earth (AoE) (*)
Notification of acceptance: June 19, 2023
Camera-ready papers: July 21, 2023 (High-dimensional Learning Dynamics (HiLD) style file required) (**)
Workshop date: Friday, July 28, 2023
(*) If you face severe difficulties meeting this deadline, please contact us before the deadline.
Papers will be submitted through CMT and are limited to 5 pages, plus supplementary materials.