Theoretical Physics for Deep Learning

ICML 2019 Workshop, June 14th 2019, Long Beach, CA


Though the purview of physics is broad and includes many loosely connected subdisciplines, a unifying theme is the endeavor to provide concise, quantitative, and predictive descriptions of the often large and complex systems governing phenomena that occur in the natural world. While one could debate how closely deep learning is connected to the natural world, it is undeniably the case that deep learning systems are large and complex; as such, it is reasonable to ask whether the rich body of ideas and powerful tools of theoretical physics could be harnessed to improve our understanding of deep learning. The goal of this workshop is to investigate this question by bringing together experts in theoretical physics and deep learning, in order to stimulate interaction and to begin exploring how theoretical physics can shed light on the theory of deep learning.

We invite the submission of papers (see the Call for Papers) on topics including, but not limited to:

    • Theoretical understanding of deep learning models and learning algorithms
    • Learning dynamics of (stochastic) gradient descent
    • Stochastic neural networks and random matrix theory
    • Symmetry, transformations, and equivariance
    • High-dimensional loss landscapes
    • Applications to real-world problems

Invited Speakers

Sanjeev Arora (Princeton)

Kyle Cranmer (NYU)

Michael Mahoney (Berkeley)

Andrea Montanari (Stanford)

Jascha Sohl-Dickstein (Google Brain)

Lenka Zdeborová (CEA Saclay)


Program Committee

Aitor Lewkowycz (Stanford), Alex Alemi (Google), Andrew Saxe (University of Oxford), Ben Adlam (Google), Boris Hanin (Texas A&M), Daniel Freeman (Google Brain), Dar Gilboa (Columbia University), David Schwab (ITS, CUNY Graduate Center), Eric Mintun (UBC), Ethan Dyer (Google), Felix Draxler (Heidelberg University), Guy Gur-Ari (Google), James Sully (UBC), Jascha Sohl-Dickstein (Google Brain), Joshua Batson (Chan Zuckerberg Biohub), Lechao Xiao (Google Brain), Michela Paganini (Facebook AI Research), Niru Maheswaranathan (Google Brain), Phan-Minh Nguyen (Stanford), Roi Livni (Tel Aviv University), Romain Couillet (CentraleSupélec), Roman Novak (Google Brain), Samuel Schoenholz (Google Brain), Sho Yaida (Facebook AI Research), Siavash Golkar (NYU), Song Mei (Stanford), SueYeon Chung (MIT), Taco Cohen (Qualcomm AI Research), Vasily Pestun (IHES)

Special thanks to Alex Alemi, Daniel Freeman, Samuel Schoenholz, and Jascha Sohl-Dickstein for additional reviewer support.

For any questions, contact: