Deadline for submission of papers: May 22, 2026, Anywhere on Earth (AoE) (*)
Notification of acceptance: June 8, 2026
Camera-ready papers: July 3, 2026 (High-dimensional Learning Dynamics style file required) (**)
Workshop date: TBD
(*) If you face severe difficulties meeting this deadline, please contact us before it passes.
Papers will be submitted through OpenReview and are limited to 5 pages, plus supplementary materials.
The workshop is non-archival. Please check the policies of the relevant journal or conference, but this usually means you can submit the same work to us and to that venue without violating its dual-submission policy. For example, you may submit to both this workshop and NeurIPS without violating the NeurIPS dual submission policy.
All submissions must be anonymized and must not contain any identifying information that would violate the double-blind reviewing policy.
For accepted workshop posters, please adhere to the following:
Information on poster specifications, as well as printing services provided by ICML, can be found here: https://icml.cc/Conferences/2025/PosterInstructions
Talks will be in-person and live-streamed.
The unprecedented scale and complexity of modern neural networks have revealed emergent patterns in learning dynamics and scaling behavior. Recent advances in analyzing high-dimensional systems have uncovered fundamental relationships between model size, data requirements, and computational resources, while highlighting the intricate nature of optimization landscapes. This understanding has led to deeper insights into architecture design, regularization, and the principles governing learning at scale.
We invite participation in the 4th Workshop on High-dimensional Learning Dynamics (HiLD) at ICML 2026. This year's theme, scaling laws, explores universality and the scaling behavior of large-scale machine learning models. We encourage submissions related to this theme, as well as other topics concerning the theoretical and empirical understanding of learning in high-dimensional spaces. We will accept high-quality submissions for poster presentations during the workshop, especially works in progress and state-of-the-art ideas.
We welcome any topic that helps us understand how model behavior evolves at large scale.
Example topics include, but are not limited to:
Compute-optimal scaling laws: theory and practice
Effect of optimizers, hyperparameters, and batch size on scaling exponents
Feature learning and representation in scaling regimes
Neural scaling laws from statistical mechanics and random matrix theory
Data-constrained and multi-epoch scaling
Architecture-dependent scaling (transformers, state-space models, mixture-of-experts)
High-dimensional limits of stochastic optimization algorithms
Mean-field and continuous-time limits of learning dynamics
Loss landscape geometry at scale (Hessian spectra, edge of stability)