3rd Workshop on High-dimensional Learning Dynamics (HiLD)
TBD, ICML 2025
Vancouver, BC, Canada
Description
The unprecedented scale and complexity of modern neural networks have revealed emergent patterns in learning dynamics and scaling behaviors. Recent advances in analyzing high-dimensional systems have uncovered fundamental relationships between model size, data requirements, and computational resources, while highlighting the intricate nature of optimization landscapes. This understanding has led to deeper insights into architecture design, regularization, and the principles governing neural learning at scale. The HiLD workshop seeks to spur research and collaboration around:
Developing analyzable models and dynamics to explain observed deep neural network phenomena;
Competition and dependencies among structures and heuristics, e.g., simplicity bias or learning staircase functions;
Creating mathematical frameworks for scaling limits of neural network dynamics as width and depth grow;
Provably explaining the role of the optimization algorithm, hyper-parameter choices, and neural network architectural choices on training/test dynamics;
Relating optimizer design and loss landscape geometry to implicit regularization, inductive bias, and generalization;
High-dimensionality, where intuitions from low-dimensional geometry tend to lead to inaccurate (and often misleading) conclusions about the properties of machine learning models trained on large real-world datasets;
Connecting model architectures and data distributions to generalization, memorization, and forgetting.
This year, the theme of the 3rd Workshop on High-dimensional Learning Dynamics is Navigating Complexity: Feature Learning Dynamics at Scale. We aim to foster discussion, discovery, and dissemination of state-of-the-art research in high-dimensional learning dynamics relevant to ML, and to bring together experts from all parts of ML, from theorists to empirical scientists. The workshop seeks to create synergies between these two groups, which often do not interact. Through a series of talks, the workshop will tackle questions on high-dimensionality in ML.
Topics
These include, but are not limited to:
The emergence of interpretable behaviors (e.g., circuit mechanisms) and capabilities (e.g., compositionality and reasoning)
Work that adapts tools from stochastic differential equations, high-dimensional probability, random matrix theory, and other theoretical frameworks to understand learning dynamics and phase transitions
Scaling laws related to internal structures and functional differences
Competition and dependencies among structures and heuristics, e.g., simplicity bias or learning staircase functions
Relating optimizer design and loss landscape geometry to implicit regularization, inductive bias, and generalization
Timeline & Deadlines
Deadline for submission of papers: May 21, 2025, Anywhere on Earth (AoE) (*)
Notification of acceptance: June 9, 2025
Camera-Ready Papers: July 11, 2025 (High-dimensional learning dynamics (HiLD) style file required) (**)
Workshop date: TBD
(*) If you face severe difficulties meeting this deadline, please contact us before the deadline.
Papers must be submitted through OpenReview and are limited to 5 pages, plus supplementary materials.
All submissions must be anonymized and may not contain any identifying information that may violate the double-blind reviewing policy.
For accepted workshop posters, please adhere to the following:
Dimensions: 36 in (H) x 24 in (W) or 91 cm (H) x 61 cm (W); note that this differs from the Main Conference
Portrait format
Information on posters, as well as printing services provided by ICML, can be found here: https://icml.cc/Conferences/2024/PosterInstructions
Illustrations: Stochastic Differential Equations (SDEs) & ML (credit: Martin Barlow); Loss Landscapes of Neural Networks; Random Matrix Theory (RMT) & ML (credit: Yuichiro Chino)