I am a PhD student in the Department of Statistics at UCLA. My research focuses on the mathematical theory of deep learning. I am fortunate to be advised by Guido Montúfar.
liangshuang at g.ucla.edu / Google Scholar
01/26: Paper on chaos and fractals in gradient descent optimization accepted to ICLR 2026.
11/25: Awarded the John Fellowship (UCLA).
10/25: Invited speaker at a SIAM OP26 mini-symposium (June 2026).
06/25: Invited talk at Scalable Statistical Machine Learning Lab, UCSD.
04/25: Invited participant at an Oberwolfach (MFO) workshop (March 2026).
01/25: Paper on implicit bias of mirror descent in ReLU networks accepted to ICLR 2025.
I aim to better understand how neural networks learn, from an optimization perspective. In particular, I am excited about:
Optimization dynamics, e.g., training trajectories in parameter space;
Implicit bias of optimization algorithms, i.e., what types of functions an algorithm tends to select among the (possibly) countless candidates that perform well on the training data;
The influence of network architecture, optimizer, parameter initialization, step size, and other training choices.
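As a toy illustration of how the step size shapes gradient descent dynamics (a minimal sketch of my own, not code from any of the papers below), consider gradient descent on the two-well objective f(x) = (x² − 1)²/4, whose gradient is x³ − x. For small step sizes the iterates converge to a minimizer x = ±1; for large step sizes the same update rule stays bounded yet never settles, behaving chaotically:

```python
def gd(x, eta, steps):
    """Run gradient descent on f(x) = (x**2 - 1)**2 / 4 and return all iterates."""
    xs = [x]
    for _ in range(steps):
        x = x - eta * (x**3 - x)  # gradient step: f'(x) = x**3 - x
        xs.append(x)
    return xs

small = gd(0.5, eta=0.1, steps=200)  # converges monotonically to the minimizer x = 1
large = gd(0.5, eta=1.5, steps=200)  # stays bounded but keeps wandering (chaotic regime)
```

Here eta = 0.1 lies in the stable range for the minimizers (the local contraction rate at x = 1 is |1 − 2η|), while at eta = 1.5 both minimizers are repelling and the update x ↦ (1 + η)x − ηx³ acts like a chaotic cubic map on a bounded interval.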
Gradient Descent with Large Step Sizes: Chaos and Fractal Convergence Region
Shuang Liang, Guido Montúfar
ICLR 2026
[arXiv] [OpenReview]
Implicit Bias of Mirror Flow for Shallow Neural Networks in Univariate Regression
Shuang Liang, Guido Montúfar
ICLR 2025 (Spotlight)
[arXiv] [OpenReview] [Video]
Pull-back Geometry of Persistent Homology Encodings
Shuang Liang, Renata Turkeš, Jiayi Li, Nina Otter, Guido Montúfar
TMLR 2024
[arXiv] [OpenReview] [Video]