Elizabeth Qian

von Kármán Instructor at Caltech

I am a von Kármán Instructor in the Department of Computing + Mathematical Sciences at Caltech. My research lies at the intersection of computational science and engineering applications, and is motivated by the need for computational methods used in engineering decision-making to be efficient and scalable. In particular, I am interested in model reduction and scientific machine learning for engineering systems, and in multi-fidelity formulations for uncertainty quantification and optimization.

I completed my PhD in Computational Science & Engineering at MIT, where I worked with Karen Willcox as a student in both the Center for Computational Science and Engineering and the Department of Aeronautics & Astronautics. As a graduate student, I was supported by the NSF Graduate Research Fellowship and the Fannie and John Hertz Foundation Fellowship. Prior to starting graduate studies, I spent a year on a Fulbright grant at RWTH Aachen University, working with Karen Veroy-Grepl and Martin Grepl. I obtained my SB and SM degrees in Aerospace Engineering from MIT in 2014 and 2017, respectively.

Upcoming talks & activities

May 2022: I will attend the ICERM Reunion Event for the Spring 2020 Semester Program on Model and Dimension Reduction.

June 2022: (1) I will present our work on balanced truncation for Bayesian inference at the Oxford Computational Mathematics & Applications Seminar on June 2.

(2) At the US National Congress on Theoretical and Applied Mechanics in Austin, TX, I will present our recent work on the cost-accuracy trade-off in learning neural operators.

(3) I will attend the workshop celebrating 30 years of Acta Numerica at the Banach Center in Będlewo, Poland.

July 2022: I will present our paper on reduced operator inference for nonlinear PDEs at the SIAM Annual Meeting in Pittsburgh, PA.

Recent news

May 2022: I am a 2022 recipient of the departmental Gradient for Change Award for contributions toward making Caltech a more diverse, equitable, and inclusive environment.

March 2022: A new manuscript on the cost-accuracy trade-off in operator learning with neural networks is out on arXiv. This work with Daniel Huang, Andrew Stuart, and Maarten de Hoop provides detailed numerical studies of the complexity question for neural network approximations of PDE-governed mappings between function spaces.

Our paper, Model reduction for linear dynamical systems via balancing for Bayesian inference, has appeared in the Journal of Scientific Computing.

February 2022: Our paper, Reduced operator inference for nonlinear partial differential equations, has been accepted for publication in the SIAM Journal on Scientific Computing.

News Archive