Research
My research aims to rigorously measure the uncertainty of predictive modeling methods. Further, I aim to use this uncertainty information to improve decision-making, such as the decisions that arise in optimization problems. While I am most interested in physics and engineering phenomena, many of the methods I develop and work with are model-agnostic, meaning they can be applied broadly across fields. Below are some topics of interest!
Multi-Fidelity Bayesian Optimal Experimental Design (OED)
When experiments are costly, time-demanding, or dangerous, carefully selecting the conditions at which to run experiments can provide considerable value. The OED framework uses Bayesian statistics to identify the best designs for our limited experimental runs, but the problem can be incredibly computationally intensive for complex physical phenomena. My work aims to expedite this process via multi-fidelity methods, in which cheaper, less accurate low-fidelity models are used in conjunction with a high-fidelity model to reduce the uncertainty of our optimization process while keeping computational costs low. Further, I am interested in the setting where model statistics (i.e., model costs and covariance information) are not exactly known. In these cases, how does our pilot sampling method impact the uncertainty of our result, and how can we mitigate that uncertainty through intelligent sampling?
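For concreteness, the snippet below sketches the basic two-fidelity control-variate estimator that underlies this style of multi-fidelity computation. It is a minimal illustration with made-up models `f_hi` and `f_lo`, not my specific method; note how the control-variate weight is itself estimated from paired samples, which is exactly where pilot-sampling uncertainty enters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical models: f_hi is the expensive high-fidelity model,
# f_lo is a cheap, correlated low-fidelity approximation.
def f_hi(x):
    return np.sin(3 * x) + 0.05 * x**2

def f_lo(x):
    return np.sin(3 * x)  # misses the quadratic trend

# Budget split: a few paired samples (evaluated on both models)
# and many cheap low-fidelity-only samples.
x_paired = rng.uniform(0, 1, 50)
x_lo = rng.uniform(0, 1, 5000)

y_hi = f_hi(x_paired)
y_lo_paired = f_lo(x_paired)
y_lo = f_lo(x_lo)

# Control-variate weight estimated from the paired (pilot) samples.
alpha = np.cov(y_hi, y_lo_paired)[0, 1] / np.var(y_lo_paired, ddof=1)

# Multi-fidelity estimate of E[f_hi]: the high-fidelity mean corrected
# by the low-fidelity discrepancy measured on the large cheap sample.
mf_estimate = y_hi.mean() + alpha * (y_lo.mean() - y_lo_paired.mean())
print(mf_estimate)
```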
Surrogate Modeling with Uncertainty
In many applications, such as uncertainty quantification (UQ) and design optimization, mathematical models of engineering phenomena must be run many, many times. When these models are computationally intensive, this presents a prohibitive bottleneck. Surrogate models are faster approximations of the high-fidelity model that stand in for it during the UQ or optimization process. When this replacement occurs, however, it becomes even more important to understand how accurate the surrogate may be. Notably, data-driven methods such as neural networks are increasingly popular but also more opaque to the user, which can further erode the user's understanding of the model's accuracy. My work aims to provide methods for estimating this uncertainty, embed uncertainty into data-driven surrogates, and utilize ideas of robustness in the end use of surrogate models.
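As a simple illustration of a surrogate that reports its own uncertainty, the sketch below fits a Gaussian-process surrogate to a handful of evaluations of a toy `expensive_model` (a hypothetical stand-in, not one of my applications) and queries its predictive standard deviation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# Pretend this is an expensive high-fidelity model we can only
# afford to evaluate a handful of times.
def expensive_model(x):
    return np.sin(4 * x) + 0.5 * x

X_train = rng.uniform(0, 1, (8, 1))
y_train = expensive_model(X_train).ravel()

# Gaussian-process surrogate: a classic choice that comes with a
# built-in predictive uncertainty estimate.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_train, y_train)

# Predictions plus pointwise standard deviations; the std widens away
# from training data, flagging where the surrogate should not be trusted.
X_test = np.linspace(0, 1, 200).reshape(-1, 1)
y_pred, y_std = gp.predict(X_test, return_std=True)
```

The same predictive variance can then feed downstream tasks, for instance deciding where the next high-fidelity evaluation would be most informative.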
Robust Optimization Under Uncertainty
When making decisions or performing optimization with a mathematical model, the error from modeling simplifications and uncertain parameters can lead to poor decisions and incorrect optima. Robust optimization and decision-making consider this uncertainty and seek optima that are insensitive to changes in parameters or that carry less risk under uncertainty. I am particularly interested in contexts where the optimization method can draw on multiple sources of information (e.g., the multi-fidelity setting) and where computational budget is the primary constraint during the optimization process. In these cases, how much of my budget should I allocate to each evaluation of my objective function? Which sources of information should be used, and in what quantity? Finally, how confident can I be in my answer, and what is the marginal improvement from allocating additional resources to refine it?
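One common formulation of such problems penalizes both the expected outcome and its spread. The sketch below is a minimal illustration of that mean-plus-spread idea with a toy objective and an assumed parameter distribution; the objective, the distribution of `theta`, and the risk weight `lam` are all placeholders, not values from any real problem.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy objective with an uncertain parameter theta; in a real problem
# this would be an expensive simulation.
def objective(x, theta):
    return (x - theta) ** 2 + 0.1 * x

# Samples standing in for the distribution of the uncertain parameter,
# e.g. obtained from a calibration step.
theta_samples = rng.normal(1.0, 0.3, 200)

# Robust objective: penalize both the mean outcome and its spread.
# The risk weight lam is a modeling choice, not a universal constant.
def robust_objective(x, lam=1.0):
    vals = objective(x[0], theta_samples)
    return vals.mean() + lam * vals.std()

result = minimize(robust_objective, x0=[0.0], method="Nelder-Mead")
print(result.x)
```

In a budget-constrained, multi-fidelity version of this problem, each evaluation of `robust_objective` would itself involve choosing which models to run and how many samples to draw, which is precisely the allocation question posed above.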