I am a PhD student in the Department of Statistics and Data Science at UCLA. I work on the mathematical theory of deep learning. I am fortunate to be advised by Guido Montúfar.
liangshuang at g.ucla.edu / Google Scholar
Jun 26: Invited talk at the SIAM OP26 mini-symposium "Global Optimization for Neural Networks".
Mar 26: Invited talk at the MFO workshop "Modern and Emerging Phenomena in Machine Learning".
Jan 26: Paper on chaos and fractals in gradient descent optimization accepted to ICLR 2026.
Nov 25: Awarded the John Fellowship (UCLA).
Jun 25: Invited talk at the Scalable Statistical Machine Learning Lab, UCSD.
Jan 25: Paper on implicit bias of mirror descent in ReLU networks accepted to ICLR 2025.
I aim to better understand how neural networks learn, from an optimization perspective. In particular, I am excited about:
Optimization dynamics, e.g., training trajectories in parameter space;
Implicit bias of optimization algorithms, i.e., which functions an algorithm tends to select among the (possibly) countless candidates that fit the training data well;
The influence of network architecture, optimizer, parameter initialization, step size, etc. (a toy step-size example is sketched below).
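As a minimal illustration of the last point, consider gradient descent on the quadratic f(x) = x²/2, a standard textbook example (not taken from the papers below): the update is x_{t+1} = (1 − η)x_t, so the iterates converge exactly when 0 < η < 2, and larger step sizes oscillate or diverge.

```python
# Toy sketch (standard textbook example, not from the papers below):
# gradient descent on f(x) = x^2 / 2, whose gradient is f'(x) = x.
# The update x_{t+1} = (1 - eta) * x_t converges iff 0 < eta < 2.

def gradient_descent(eta: float, x0: float = 1.0, steps: int = 20) -> list[float]:
    """Run gradient descent on f(x) = x^2 / 2 with step size eta."""
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] - eta * xs[-1])  # x - eta * f'(x)
    return xs

for eta in (0.5, 1.5, 2.5):
    # eta = 0.5 and 1.5 shrink |x| toward 0; eta = 2.5 blows up.
    print(eta, gradient_descent(eta)[-1])
```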
Gradient Descent with Large Step Sizes: Chaos and Fractal Convergence Region
Shuang Liang, Guido Montúfar
ICLR 2026
[arXiv] [OpenReview]
Implicit Bias of Mirror Flow for Shallow Neural Networks in Univariate Regression
Shuang Liang, Guido Montúfar
ICLR 2025 (Spotlight)
[arXiv] [OpenReview] [Video]
Pull-back Geometry of Persistent Homology Encodings
Shuang Liang, Renata Turkeš, Jiayi Li, Nina Otter, Guido Montúfar
TMLR 2024
[arXiv] [OpenReview] [Video]