Mathematics of generative modeling and in-context learning
Approximation and statistical capabilities of neural networks
Applications to problems arising from science and engineering, including PDEs, dynamical systems, inverse problems, and optimal transport
A Theory of Diversity for Random Matrices, with Applications to the Task Generalization of Transformers, in collaboration with Yulong Lu and Shaurya Sehgal. In preparation.
In-Context Operator Learning on the Space of Probability Measures, in collaboration with Dixi Wang, Yineng Chen, Yulong Lu, and Rongjie Lai. Submitted. arXiv.
In-Context Learning of Linear Systems: Generalization Theory and Application to Operator Learning, in collaboration with Yulong Lu, Wuzhe Xu, and Tianhao Zhang. Submitted. arXiv.
In-Context Learning of Linear Dynamical Systems with Transformers: Approximation Bounds and Depth Separation, in collaboration with Yuxuan Zhao, Yulong Lu, and Tianhao Zhang. NeurIPS 2025. arXiv.
Score-Based Generative Models Break the Curse of Dimensionality in Learning a Family of Sub-Gaussian Probability Distributions, in collaboration with Yulong Lu. ICLR 2024. arXiv, poster.
Minisymposium on "Physics-Guided and Generative AI for Scientific Computing: Theory, Algorithms and Applications", SIAM NNP Sectional Meeting, November 2024.
Data Science Seminar, University of Minnesota, October 2024.
Minisymposium on "Advances in Generative Models, Differential Equations, and Inverse Problems", SIAM Imaging Science 2024, May 2024.