Research Scientist at Google DeepMind
Member, Simons Collaboration on the Physics of Learning & Neural Computation
I am a theoretical & computational scientist working at the intersection of physics and machine learning. In one direction, my research focuses on the physics of learning & computation, investigating the fundamental principles behind neural systems and artificial intelligence. In the other, it develops ideas and models from statistical learning and AI to advance the physics of materials.
Research interests: theory of neural networks, foundations of AI, representation learning, statistical learning for materials, the physics of quantum materials
Press: Simons Collaboration, Quanta Magazine
Contact: yasamanbahri@gmail.com
Our paper on how analogical reasoning emerges from correlations in natural language is now published in NeurIPS 2025.
Our paper on analytically tractable learning dynamics for word embeddings models is now published in NeurIPS 2025.
We were awarded the Simons Collaboration on the Physics of Learning & Neural Computation.
I gave an invited talk at the ICML 2025 workshop on high-dimensional learning dynamics.
I gave an invited talk at APS March Meeting 2025.
Our paper on using large language models to perform quantum many-body physics calculations is now published in Communications Physics.
Lectures I gave at the Les Houches School of Physics have been published in the Journal of Statistical Mechanics: Theory & Experiment.
Academic Bio
I was trained as a theoretical quantum condensed matter physicist and received my Ph.D. in Physics from UC Berkeley. My graduate work was in the field of quantum many-body theory and strongly correlated physics, and I was fortunate to have Professor Ashvin Vishwanath as my doctoral advisor. Prior to that, I received my B.A. in Physics & Mathematics from UC Berkeley and completed an undergraduate thesis in condensed matter theory with Professor Joel Moore.
On the Emergence of Linear Analogies in Word Embeddings
D. Korchinski, D. Karkada, Y. Bahri, M. Wyart
NeurIPS 2025 (Neural Information Processing Systems)
Closed-form Training Dynamics Reveal Learned Features and Linear Structure in Word2Vec-like Models
D. Karkada, J. Simon, Y. Bahri, M. DeWeese
NeurIPS 2025
Quantum Many-Body Physics Calculations with Large Language Models
H. Pan, N. Mudur, W. Taranto, M. Tikhanovskaya, S. Venugopalan, Y. Bahri, M. Brenner, E. Kim
Communications Physics 8: 49 (2025)
Explaining Neural Scaling Laws
Y. Bahri*, E. Dyer*, J. Kaplan*, J. Lee*, U. Sharma*
PNAS 121 (27) e2311878121 (2024)
Les Houches Lectures on Deep Learning at Large and Infinite Width
Y. Bahri, B. Hanin, A. Brossollet, V. Erba, C. Keup, R. Pacelli, J. Simon
Journal of Statistical Mechanics: Theory & Experiment 104012 (2024)
Statistical Mechanics of Deep Learning
Y. Bahri, J. Kadmon, J. Pennington, S.S. Schoenholz, J. Sohl-Dickstein, S. Ganguli
Annual Review of Condensed Matter Physics (2020)
Deep Neural Networks as Gaussian Processes
J. Lee*, Y. Bahri*, R. Novak, S.S. Schoenholz, J. Pennington, J. Sohl-Dickstein
ICLR 2018 (International Conference on Learning Representations)
*Denotes equal contribution