Deadline for submission of papers: May 29, 2023, Anywhere on Earth (AoE) (*)
Notification of acceptance: June 19, 2023
Camera-ready papers: July 21, 2023 (High-dimensional Learning Dynamics style file required) (**)
Workshop date: Friday, July 28, 2023
(*) If you face severe difficulties meeting this deadline, please contact us before it passes.
Papers must be submitted through CMT and are limited to 5 pages, plus supplementary materials.
Talks will be given in person and live-streamed.
Invited speakers:
Sanjeev Arora (Princeton)
SueYeon Chung (NYU)
Murat A. Erdogdu (Toronto)
Surya Ganguli (Stanford)
Andrea Montanari (Stanford)
The study of the learning dynamics of machine learning algorithms, especially in the high-dimensional regime, is enjoying tremendous growth as a field in ML. It provides theoretical guarantees and tools to explain many phenomena observed in practical applications. Techniques from stochastic differential equations (SDEs), high-dimensional probability, and random matrix theory are regularly employed to model high-dimensional data and the trajectories of optimization algorithms.
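As one concrete illustration of such a tool (a sketch only, with notation introduced just for this example): under standard smoothness assumptions, the SGD recursion with step size $\eta$,
$$\theta_{k+1} = \theta_k - \eta\,\nabla \ell(\theta_k; x_k),$$
is weakly approximated, to first order in $\eta$, by the SDE
$$d\Theta_t = -\nabla L(\Theta_t)\,dt + \sqrt{\eta}\,\Sigma(\Theta_t)^{1/2}\,dB_t,$$
where $L$ is the population loss, $\Sigma$ is the covariance of the stochastic gradients, and $B_t$ is a standard Brownian motion. We aim to foster discussion, discovery, and dissemination of state-of-the-art research in high-dimensional learning dynamics relevant to ML.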
We invite participation in the 1st International (in-person) Workshop on High-dimensional Learning Dynamics, to be held as part of the ICML 2023 conference. We welcome high-quality submissions to be presented as contributed talks or posters during the workshop. We are especially interested in participants who can contribute theory, algorithms, models, or applications with a machine learning focus on the dynamics of algorithms and on random matrix theory and high-dimensionality as applied to ML, and we encourage both work in progress and state-of-the-art ideas.
All accepted contributions will be listed on the workshop webpage and are expected to be presented as posters during the workshop. A few submissions will additionally be selected for contributed talks in the main sessions.
The main topics include, but are not limited to:
Modeling of high-dimensional datasets
Training and learning dynamics of algorithms
Random matrix theory applications in ML
SDEs and other continuous-time limits of SGD
Modeling of loss landscapes
Universality concepts in ML
Average-case analysis of optimization algorithms
Mean field approximation regimes
Neural tangent kernel and beyond
Gradient flow of neural networks
Neural ODEs
Applications of high-dimensional statistics and random matrix theory in ML
Simple analyzable models for deep neural networks
Sketching, sampling, and general randomized methods in ML
Learning and information propagation on large (random) graphs
Scaling laws and their effect on generalization performance