The call for papers is now closed.
The increasing computational demands of modern ML create a critical challenge: thorough experimentation becomes prohibitively expensive precisely when we most need to understand and steer model behavior. Small-scale experiments offer a powerful approach for systematic investigation, enabling both scientific understanding and practical advances. Recent work demonstrates the endless opportunities at this scale, including:
diagnoses and mitigations of training pathologies;
minimalistic replications of modern pipelines;
elementary synthetic tasks that “stress test” architectures and motivate new designs; and
discovery of intriguing phenomena.
This workshop aims to highlight how methods and opportunities at small scale can unlock new insights and drive progress. The emphasis will be on advancing scientific understanding (and, where relevant, its interplay with theory), without the need to improve state-of-the-art performance.
We invite submissions that demonstrate the potential of deriving insights from limited computational resources. Topics of interest include but are not limited to scientific and/or theoretical understanding of the following:
Inductive biases and generalization properties arising from datasets and training objectives.
Training dynamics of neural networks.
Inference behaviors and test-time computation.
Evaluation and stress tests.
Diagnosis of failure modes in current systems/paradigms.
Architectural/ecosystem design choices and innovations.
Mechanistic interpretability of trained models.
Societal impact and ethical considerations.
Novel methods for steering, fine-tuning, and/or aligning models.
Elementary synthetic tasks that recreate intriguing phenomena observed in large-scale models.
Algorithmic innovations in general.
We accept submissions in one of the following formats:
A 4-page single-column paper (with unlimited references and appendix), accompanied by a Jupyter notebook supporting the main claims in the paper.
A Jupyter notebook, accompanied by an optional 2-page writeup (with unlimited references and appendix) summarizing the main takeaways.
A 4-page single-column position / survey paper (with unlimited references and appendix).
We prioritize submissions that are reproducible with limited resources. Specifically, we strongly encourage submissions to include an anonymized Jupyter notebook that reviewers can run on Google Colab’s free tier within ~3 hours. If there are reasons (e.g. data privacy) that prevent a Jupyter notebook from being submitted, please justify this in the submission. The Jupyter notebook should be clearly documented and runnable on a free-tier Colab instance. Please provide an environment specification if necessary. We give some example notebooks at this GitHub link.
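As an illustration, an environment specification can be as simple as a first notebook cell that pins exact package versions (the package names and versions below are hypothetical placeholders; pin whatever your notebook actually imports):

```shell
# Hypothetical first cell of a submission notebook.
# Pinning exact versions helps reviewers on free-tier Colab
# reproduce the same environment as the authors.
%pip install numpy==1.26.4 torch==2.2.1 matplotlib==3.8.3
```

Alternatively, a `requirements.txt` included alongside the notebook serves the same purpose.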
The reviewing process will be double-blind and all submissions must be anonymized. Please do not include author names, affiliations, acknowledgements, or any other identifying information in your submission. Submissions and reviews will not be made public.
All submissions must be made through OpenReview. Please use the following style file.
Note: If you are creating a new OpenReview profile, we strongly recommend using your institutional email address. Profiles created without an institutional email may require a moderation process, which can take up to two weeks.
Timeline
Paper & notebook submission deadline: extended to May 26, 2025, 4:59pm PDT / 11:59pm UTC.
Review period: May 27 - June 5 (4:59pm PDT / 11:59pm UTC), 2025.
Notification date: June 9, 2025.
Dual submission: This workshop is non-archival and will not have official proceedings. Workshop submissions can be submitted to other venues. We welcome ongoing and unpublished work, including papers that are under review at the time of submission. We do not accept submissions that have been accepted for publication in other venues with archival proceedings, with the only exception being ICML 2025 main conference papers.
Please find the relevant files for submission here: https://drive.google.com/drive/folders/1SyfMQYB_B1mgu5bavRCuGrto8kgkOWAh
Please refer to the FAQs for commonly asked questions about submissions.