A new position paper argues that deep learning is quietly acquiring something it has long lacked: a real scientific theory. The authors identify five research threads (solvable toy models, tractable limits, scaling laws, theories of hyperparameters, and universal training behaviors) that are converging into what they call learning mechanics. The vision is a physics-style, first-principles theory that makes falsifiable, quantitative predictions about how neural networks train. Where mechanistic interpretability aims to be the biology of deep learning, learning mechanics aspires to be its physics.
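For a concrete sense of what a falsifiable, quantitative prediction looks like here, take the most familiar of the five threads, scaling laws. The position paper may formalize them differently, but a standard instance is the parametric loss form of Hoffmann et al. (2022), which models final pretraining loss as a function of parameter count $N$ and training tokens $D$:

$$ L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}} $$

Fit the constants $E, A, B, \alpha, \beta$ on a handful of small training runs and the formula predicts loss at scales never trained; a systematic mismatch at larger scale falsifies the functional form. That extrapolate-and-test loop is the kind of physics-style workflow the summary above points to.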