Instructor: Yizhe Zhu, yizhezhu@usc.edu
Class schedule: MWF 1:00-1:50 pm at GFS 112
Office Hours: Friday 12:00-12:50 pm, KAP 464B
Course Description: This course provides an introduction to the fundamental methods, concepts, and tools of random matrix theory, with an emphasis on its modern applications in machine learning. We will explore both the asymptotic and non-asymptotic perspectives of random matrices and discuss how these theoretical foundations inform applications in network analysis, signal processing, and deep learning theory.
Homework: Assignments will be posted on Brightspace.
Course Schedule: Below is a tentative schedule, to be updated as the semester progresses.
Week 1
January 12: Wigner matrices, semicircle law, moment method proof (see the sketch below)
January 14: Moment method proof, continued
January 16: Marchenko-Pastur law
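A quick numerical illustration of this week's semicircle law (a minimal numpy sketch, not part of the official course materials; the Gaussian entries, matrix size, and binning are illustrative choices):

    # Eigenvalues of a normalized Wigner matrix vs. the semicircle
    # density (1/(2*pi)) * sqrt(4 - x^2) supported on [-2, 2].
    import numpy as np

    rng = np.random.default_rng(0)
    n = 2000
    G = rng.standard_normal((n, n))
    W = (G + G.T) / np.sqrt(2 * n)        # symmetric, off-diagonal variance 1/n
    eigs = np.linalg.eigvalsh(W)

    hist, edges = np.histogram(eigs, bins=50, range=(-2.5, 2.5), density=True)
    centers = (edges[:-1] + edges[1:]) / 2
    sc = np.where(np.abs(centers) <= 2,
                  np.sqrt(np.maximum(4 - centers**2, 0)) / (2 * np.pi), 0.0)
    print("max deviation from semicircle over bins:", np.abs(hist - sc).max())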
Week 2
January 19: No class (Martin Luther King Jr. Day)
January 21: Marchenko-Pastur law, spectral norm bound via high moment method (see the sketch below)
January 23: High moment method, Stieltjes transform
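The Marchenko-Pastur prediction for the bulk edges is easy to check numerically (a minimal numpy sketch; the dimensions and Gaussian entries are illustrative choices):

    # Spectrum of a sample covariance matrix S = X X^T / n, X of size p x n.
    # For aspect ratio gamma = p/n <= 1, the bulk should fill
    # [(1 - sqrt(gamma))^2, (1 + sqrt(gamma))^2].
    import numpy as np

    rng = np.random.default_rng(1)
    p, n = 500, 2000                          # gamma = 0.25
    X = rng.standard_normal((p, n))
    eigs = np.linalg.eigvalsh(X @ X.T / n)
    g = p / n
    print(f"empirical spectrum: [{eigs.min():.3f}, {eigs.max():.3f}]")
    print(f"MP edges:           [{(1 - np.sqrt(g))**2:.3f}, {(1 + np.sqrt(g))**2:.3f}]")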
Week 3
January 26: Stieltjes transform (see the sketch below)
January 28: Marchenko-Pastur law via Stieltjes transform
January 30: Semicircle law via Stieltjes transform, local law
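The self-consistent equation from these lectures can also be sanity-checked numerically: the empirical Stieltjes transform of a Wigner matrix should match the semicircle answer m(z) = (-z + sqrt(z^2 - 4))/2 for z in the upper half-plane (a minimal numpy sketch; the test points and Gaussian entries are illustrative choices):

    # m_n(z) = (1/n) tr (W - zI)^{-1} vs. the solution of m = 1 / (-z - m),
    # taking the square-root branch with Im sqrt > 0 so that Im m > 0.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 1000
    G = rng.standard_normal((n, n))
    W = (G + G.T) / np.sqrt(2 * n)
    eigs = np.linalg.eigvalsh(W)

    for z in [2j, 1 + 1j, -0.5 + 0.3j]:
        m_emp = np.mean(1.0 / (eigs - z))     # empirical Stieltjes transform
        s = np.sqrt(z**2 - 4)
        if s.imag < 0:                        # pick the branch with Im m > 0
            s = -s
        m_sc = (-z + s) / 2
        print(f"z = {z}: |m_n(z) - m_sc(z)| = {abs(m_emp - m_sc):.4f}")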
Week 4
February 2: Eigenvector delocalization
February 4: Stochastic block model
February 6: Matrix Bernstein's inequality (see the sketch below)
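A numerical sanity check of matrix Bernstein in its expectation form, E||sum_i X_i|| <= sqrt(2 v log(2d)) + (L/3) log(2d) for centered, bounded, independent symmetric matrices (a minimal numpy sketch; the Rademacher-times-fixed-matrix model and all sizes are illustrative choices):

    # X_i = eps_i * A_i with Rademacher eps_i and fixed symmetric A_i, so
    # L = max_i ||A_i|| and v = ||sum_i A_i^2|| (matrix variance statistic).
    import numpy as np

    rng = np.random.default_rng(3)
    d, k = 50, 200
    A = [(B + B.T) / 2 for B in 0.1 * rng.standard_normal((k, d, d))]
    L = max(np.linalg.norm(Ai, 2) for Ai in A)
    v = np.linalg.norm(sum(Ai @ Ai for Ai in A), 2)
    bound = np.sqrt(2 * v * np.log(2 * d)) + L * np.log(2 * d) / 3

    norms = [np.linalg.norm(sum(e * Ai for e, Ai in
                                zip(rng.choice([-1.0, 1.0], size=k), A)), 2)
             for _ in range(20)]
    print(f"mean ||sum_i X_i||: {np.mean(norms):.3f} <= bound: {bound:.3f}")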
Week 5
February 9: Davis-Kahan inequality (see the sketch below)
February 11: Davis-Kahan inequality
February 13: Detection threshold
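A quick check of a Davis-Kahan-type bound, in the Yu-Wang-Samworth form sin(theta) <= 2||E|| / (lambda_1 - lambda_2) for the top eigenvector (a minimal numpy sketch; the rank-one signal plus Wigner noise below is an illustrative choice, not necessarily the setting used in lecture):

    import numpy as np

    rng = np.random.default_rng(4)
    n = 500
    u = rng.standard_normal(n)
    u /= np.linalg.norm(u)
    A = 5.0 * np.outer(u, u)                  # eigengap lambda_1 - lambda_2 = 5
    G = rng.standard_normal((n, n))
    E = (G + G.T) / np.sqrt(2 * n)            # Wigner noise, ||E|| ~ 2

    u_hat = np.linalg.eigh(A + E)[1][:, -1]   # top eigenvector of A + E
    sin_theta = np.sqrt(max(0.0, 1.0 - (u @ u_hat) ** 2))
    bound = 2 * np.linalg.norm(E, 2) / 5.0
    print(f"sin(theta) = {sin_theta:.3f} <= bound = {bound:.3f}")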
Week 6
February 16: No class (Presidents' Day)
February 18: Refinement of Davis-Kahan
February 20: Refinement of Davis-Kahan
Week 7
February 23: Non-backtracking operator
February 25: Non-backtracking operator
February 27: Non-backtracking operator
Week 8
March 2: BBP phase transition (see the sketch below)
March 4: BBP phase transition
March 6: BBP phase transition
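The BBP transition is easy to see in simulation: for a spiked Wigner matrix W + theta * v v^T, the top eigenvalue detaches from the bulk edge 2 only when theta > 1, where it sits near theta + 1/theta and the eigenvector overlap approaches 1 - theta^{-2} (a minimal numpy sketch; the spike direction and sizes are illustrative choices):

    import numpy as np

    rng = np.random.default_rng(5)
    n = 1500
    G = rng.standard_normal((n, n))
    W = (G + G.T) / np.sqrt(2 * n)
    v = np.ones(n) / np.sqrt(n)               # unit-norm spike direction

    for theta in [0.5, 1.0, 1.5, 3.0]:
        vals, vecs = np.linalg.eigh(W + theta * np.outer(v, v))
        top, overlap = vals[-1], (v @ vecs[:, -1]) ** 2
        pred_top = theta + 1 / theta if theta > 1 else 2.0
        pred_ov = max(0.0, 1 - theta ** -2)
        print(f"theta = {theta}: top {top:.3f} (predicted {pred_top:.3f}), "
              f"overlap {overlap:.3f} (predicted {pred_ov:.3f})")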
Week 9
March 9: Spiked matrix models
March 11:
March 13:
Week 10
March 16: No class (spring break)
March 18: No class (spring break)
March 20: No class (spring break)
Week 11
March 23: Tensor completion
March 25: Tensor PCA
March 27: Tensor PCA
Week 12
March 30: Double descent (see the sketch below)
April 1: Double descent
April 3: Benign overfitting
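The double descent curve can be reproduced with minimum-norm least squares: the test risk of the pseudoinverse solution peaks at the interpolation threshold p = n and decreases again for p > n (a minimal numpy sketch; the Gaussian design, linear target, and noise level are illustrative choices):

    import numpy as np

    rng = np.random.default_rng(6)
    n, p_max, sigma = 100, 400, 0.5
    beta = rng.standard_normal(p_max) / np.sqrt(p_max)    # true coefficients

    X = rng.standard_normal((n, p_max))
    y = X @ beta + sigma * rng.standard_normal(n)
    X_test = rng.standard_normal((2000, p_max))
    y_test = X_test @ beta                                # noiseless test labels

    for p in [25, 50, 90, 100, 110, 200, 400]:
        # Min-norm solution on the first p features (misspecified when p < p_max).
        b_hat = np.linalg.pinv(X[:, :p]) @ y
        risk = np.mean((X_test[:, :p] @ b_hat - y_test) ** 2)
        print(f"p = {p:3d}: test risk {risk:.3f}")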
Week 13
April 6: Benign overfitting
April 8: Random feature models
April 10: Random feature models
Week 14
April 13: Neural tangent kernel
April 15: Neural tangent kernel
April 17: Precise asymptotics of generalization error
Week 15
April 20: Precise asymptotics of generalization error
April 22: High-dimensional learning dynamics
April 24: High-dimensional learning dynamics
Week 16
April 27: Final presentation
April 29: Final presentation
May 1: Final presentation
A list of papers for the final project:
Concentration inequalities
Afonso Bandeira, Kevin Lucca, Petar Nizić-Nikolac, Ramon van Handel. Matrix Chaos Inequalities and Chaos of Combinatorial Type
March T. Boedihardjo. Injective norm of random tensors with independent entries
Danil Akhtiamov, David Bosch, Reza Ghane, K Nithin Varma, Babak Hassibi. A novel Gaussian min-max theorem and its applications
Aicke Hinrichs, David Krieg, Erich Novak, Joscha Prochno, Mario Ullrich. Random sections of ellipsoids and the power of random information
Afonso S. Bandeira, Giorgio Cipolloni, Dominik Schröder, Ramon van Handel. Matrix Concentration Inequalities and Free Probability II. Two-sided Bounds and Applications
Tatiana Brailovskaya, Ramon van Handel. Extremal random matrices with independent entries and matrix superconcentration inequalities
Spectral clustering and community detection
Anderson Ye Zhang. Fundamental Limits of Spectral Clustering in Stochastic Block Models
Alexandra Carpentier, Christophe Giraud, Nicolas Verzelen. Phase Transition for Stochastic Block Model with more than $\sqrt{n}$ Communities
Phuc Tran, Van Vu. New perturbation bounds for low rank approximation of matrices via contour analysis
Spiked models
Alice Guionnet, Justin Ko, Florent Krzakala, Pierre Mergny, Lenka Zdeborová. Spectral Phase Transitions in Non-Linear Wigner Spiked Models
Michael J. Feldman. Spectral Properties of Elementwise-Transformed Spiked Matrices
Zhangsong Li. The Algorithmic Phase Transition in Correlated Spiked Models
Pravesh K. Kothari, Jeff Xu. Smooth Trade-off for Tensor PCA via Sharp Bounds for Kikuchi Matrices
Jingqiu Ding, Samuel B. Hopkins, David Steurer. Estimating Rank-One Spikes from Heavy-Tailed Noise via Self-Avoiding Walks
Nonlinear random matrix models
Lucas Benigni, Elliot Paquette. Eigenvalue distribution of the Neural Tangent Kernel in the quadratic scaling
Chiraag Kaushik, Justin Romberg, Vidya Muthukumar. A general technique for approximating high-dimensional empirical kernel matrices
Zhichao Wang, Denny Wu, Zhou Fan. Nonlinear spiked covariance matrices and signal propagation in deep neural networks
Deep learning theory
Nikhil Ghosh, Mikhail Belkin. A Universal Trade-off Between the Model Size, Test Loss, and Training Loss of Linear Predictors
Licong Lin, Jingfeng Wu, Sham Kakade, Peter Bartlett, Jason Lee. Scaling Laws in Linear Regression: Compute, Parameters, and Data
Jimmy Ba, Murat A Erdogdu, Taiji Suzuki, Zhichao Wang, Denny Wu. Learning in the presence of low-dimensional structure: a spiked random matrix perspective
Yatin Dandi, Florent Krzakala, Bruno Loureiro, Luca Pesce, Ludovic Stephan. How two-layer neural networks learn, one (giant) step at a time
High-dimensional statistics
Zhidong Bai, Wang Zhou. Large sample covariance matrices without independence structures in columns
Parham Rezaei, Filip Kovacevic, Francesco Locatello, Marco Mondelli. High-dimensional Analysis of Synthetic Data Selection
Alden Green, Elad Romanov. The High-Dimensional Asymptotics of Principal Component Regression