Ryan P. Adams - Learning Space Group Invariant Functions
Alex Alemi - Inferential Engines
Yasaman Bahri
James Bradbury - Asymptotic scaling of chips and clusters for large model training
Dmitri Chklovskii - Harnessing Insights from Neuroscience for More Powerful and Efficient Machine Learning
Miles Cranmer - Symbolic Distillation of Neural Networks
David Dohan
Michael R. Douglas - Can a computer judge interestingness?
Andrey Gromov
Boris Leonidovich Hanin - Bayesian Interpolation with Deep Linear Networks
Shirley Ho - Deep learning as a last resort
Arthur Jacot-Guillarmod - Implicit Bias of Large Depth Networks: the Bottleneck Rank
Jared Kaplan - AI Safety and Self-Supervision
Julia Kempe - Towards Understanding Adversarial Robustness
Dima Krotov - Modern Hopfield Networks for Novel Transformer Architectures
Preetum Nakkiran - Calibration in Deep Learning: Theory and Practice
Zohar Ringel - Adaptive Kernel Approaches to Feature Learning in Deep Neural Networks
Inbar Seroussi - Thermodynamic Description of Feature Learning in Deep Neural Networks
Eva Silverstein - Energy-Conserving Hamiltonian Dynamics for Predictably Improved Optimization and Sampling
Sam Smith - Can numerical analysis explain the generalization benefit of SGD?
Jascha Sohl-Dickstein
Jamie Sully - A Solvable Model of Neural Scaling Laws
Josh Susskind and Etai Littwin - Towards Understanding the Implicit Biases of Adaptive Optimization
Jesse Thaler - Learning Uncertainties the Frequentist Way
Yuhuai Wu - From Minerva to Autoformalization: A Path Towards Math Intelligence
Greg Yang - The unreasonable effectiveness of mathematics in large scale deep learning