Deadline for submission of papers: May 21, 2025, anywhere on earth (*)
Notification of acceptance: June 9, 2025
Camera-ready papers: July 11, 2025 (High-dimensional Learning Dynamics style file required)
Workshop date: TBD
(*) If you face severe difficulties meeting this deadline, please contact us before the deadline.
Papers must be submitted through OpenReview and are limited to 5 pages, plus supplementary materials.
The workshop is non-archival. Please check with other journals and conferences, but this usually means you can submit the same work to us and to the journal or conference without violating their dual-submission policies. For example, you may submit to us and to NeurIPS without violating its dual submission policy.
All submissions must be anonymized and must not contain any identifying information that would violate the double-blind reviewing policy.
For accepted workshop posters, please adhere to ICML's poster instructions. Information, as well as printing services offered by ICML, can be found here: https://icml.cc/Conferences/2025/PosterInstructions
Talks will be given in person and live-streamed.
The unprecedented scale and complexity of modern neural networks have revealed emergent patterns in learning dynamics and scaling behavior. Recent advances in analyzing high-dimensional systems have uncovered fundamental relationships between model size, data requirements, and computational resources, while highlighting the intricate nature of optimization landscapes. This understanding has led to deeper insights into architecture design, regularization, and the principles governing neural learning at scale.
We invite participation in the 3rd Workshop on High-dimensional Learning Dynamics (HiLD) at ICML 2025. This year's theme, Navigating Complexity: Feature Learning Dynamics at Scale, explores how neural networks develop and organize representations over the course of training large-scale models. We encourage submissions related to our theme, as well as other topics concerning the theoretical and empirical understanding of learning in high-dimensional spaces. We will accept high-quality submissions for poster presentation during the workshop, especially work-in-progress and state-of-the-art ideas.
We welcome any topic that furthers our understanding of how model behaviors evolve or emerge.
Example topics include but are not limited to:
The emergence of interpretable behaviors (e.g., circuit mechanisms) and capabilities (e.g., compositionality and reasoning)
Work that adapts tools from stochastic differential equations, high-dimensional probability, random matrix theory, and other theoretical frameworks to understand learning dynamics and phase transitions
Scaling laws related to internal structures and functional differences
Competition and dependencies among structures and heuristics, e.g., simplicity bias or learning staircase functions
Relating optimizer design and loss landscape geometry to implicit regularization, inductive bias, and generalization
Modeling of high-dimensional datasets
Modeling of loss landscapes and simple analyzable models for deep neural networks
Average-case analysis of optimization algorithms
Mean-field approximation regimes, the neural tangent kernel, and beyond