Optimization has played a significant role in many areas, such as engineering, the sciences, and health care. My research focuses on the design and analysis of (stochastic) first-order methods for nonlinear optimization problems. I strive to develop algorithms that are simple yet efficient. The core idea is to transform a complex problem into a simpler one, thereby making the overall algorithm more tractable. Representative examples of such transformations include exact penalization, smoothing, and convexification; a brief sketch of exact penalization appears after the list below. These techniques enable us to apply well-established algorithmic frameworks and yield both strong theoretical guarantees and compelling practical performance. I am particularly interested in
First-order methods for large-scale nonconvex (nonsmooth) constrained optimization
Stochastic (sub)gradient methods for statistical and machine learning
Complexity analysis (iteration, oracle, and computational)
Applications, e.g., fairness in AI, deep neural networks, distributionally robust optimization, decentralized distributed learning, semi-supervised learning, bilevel optimization, minimax problems, large language models
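As a minimal illustration of the exact penalization idea mentioned above (the objective $f$, constraint map $c$, and penalty parameter $\rho$ are generic placeholders, not taken from any specific paper): an equality-constrained problem
\[
\min_{x} \; f(x) \quad \text{s.t.} \quad c(x) = 0
\]
can be replaced by the unconstrained, nonsmooth penalized problem
\[
\min_{x} \; f(x) + \rho \, \| c(x) \|_{1}.
\]
Under suitable regularity conditions, once $\rho$ exceeds a problem-dependent threshold (roughly, a bound on the Lagrange multipliers), solutions of the original constrained problem are recovered from the penalized one, so standard (sub)gradient-type methods can be applied directly to the latter.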
I’ve regularly read research on LLMs since August 2022.
I speak to them almost every day.
I use LLMs to support my work in writing, mathematical reasoning, and example construction.
GPT-4.5 is excellent at writing.