Foundation models, multi-task learning and in-context learning
Approximation theory for deep learning architectures
Machine learning for infinite-dimensional data
Sampling and generative modeling
A Theory of Diversity for Random Matrices, with Applications to the In-Context Learning of Schrödinger Equations, in collaboration with Yulong Lu and Shaurya Sehgal. Submitted. arXiv.
In-Context Operator Learning on the Space of Probability Measures, in collaboration with Dixi Wang, Yineng Chen, Yulong Lu, and Rongjie Lai. Submitted. arXiv.
In-Context Learning of Linear Systems: Generalization Theory and Application to Operator Learning, in collaboration with Yulong Lu, Wuzhe Xu, and Tianhao Zhang. Submitted. arXiv.
Flow Matching for Multimodal Distributions, in collaboration with Gaoxiang Luo, Sihang Zang, Yuxiang Wan, Yulong Lu, and Ju Sun. CVPR 2026. Website.
In-Context Learning of Linear Dynamical Systems with Transformers: Approximation Bounds and Depth Separation, in collaboration with Yuxuan Zhao, Yulong Lu, and Tianhao Zhang. NeurIPS 2025. arXiv.
Score-Based Generative Models Break the Curse of Dimensionality in Learning a Family of Sub-Gaussian Probability Distributions, in collaboration with Yulong Lu. ICLR 2024. arXiv, poster.
Minisymposium on "In-context learning for PDEs and inverse problems", SIAM UQ 2026, March 2026.
Minisymposium on "Physics-guided and generative AI for scientific computing: theory, algorithms and applications", SIAM NNP Sectional Meeting, November 2024.
Data Science Seminar, University of Minnesota, October 2024.
Minisymposium on "Advances in generative models, differential equations, and inverse problems", SIAM Imaging Science 2024, May 2024.