My research aims to make multi-fidelity methods more amenable to high-impact applications such as Bayesian optimal experimental design (OED), stochastic optimization, and uncertainty quantification for computationally intensive modeling scenarios. While I am most interested in applying these methods to physics and engineering problems, many of the methods I develop are model-agnostic, so they apply broadly to any field involving uncertainty and decision-making. Below are some highlights of my recent work, followed by my publications.
When experiments are costly, time-consuming, or dangerous, carefully selecting the conditions under which to run them can provide considerable value. When the goal of an experiment is model-parameter inference, this selection process generally requires a double-nested Monte Carlo (DNMC) estimator to approximate the Expected Information Gain (EIG) of each candidate design. DNMC estimators can be prohibitively expensive for complex physical systems whose accurate models are computationally intensive. To accelerate the OED process, we designed a novel multi-fidelity EIG (MF-EIG) estimator in which an ensemble of utility models of varying accuracy and cost is combined into a single EIG estimator via the approximate control variate (ACV) method. The MF-EIG estimator can yield several orders-of-magnitude speed-ups over single-fidelity DNMC methods, greatly reducing the computational demands of OED.
The preprint is available at:
https://arxiv.org/abs/2501.10845
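For context, the sketch below illustrates the baseline single-fidelity DNMC EIG estimator on a hypothetical one-parameter problem; the linear forward_model, standard-normal prior, and noise level sigma are all illustrative assumptions, not the models from the preprint.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(0)

def forward_model(theta, d):
    """Hypothetical stand-in for an expensive simulator: response scales with the design d."""
    return d * theta

def log_likelihood(y, theta, d, sigma=0.1):
    """Gaussian observation noise with standard deviation sigma."""
    resid = y - forward_model(theta, d)
    return -0.5 * (resid / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

def dnmc_eig(d, n_outer=500, n_inner=500, sigma=0.1):
    """Double-nested Monte Carlo estimate of the expected information gain at design d."""
    total = 0.0
    for _ in range(n_outer):
        theta = rng.standard_normal()                                # draw a parameter from the prior
        y = forward_model(theta, d) + sigma * rng.standard_normal()  # simulate an observation
        # Inner loop: Monte Carlo estimate of the log evidence log p(y | d)
        theta_inner = rng.standard_normal(n_inner)
        log_evidence = logsumexp(log_likelihood(y, theta_inner, d, sigma)) - np.log(n_inner)
        total += log_likelihood(y, theta, d, sigma) - log_evidence
    return total / n_outer

# Designs that amplify the signal relative to the noise should yield larger EIG.
for d in (0.5, 1.0, 2.0):
    print(f"design d = {d:.1f}: EIG estimate = {dnmc_eig(d):.3f}")
```

Every term in the inner sum requires a forward-model evaluation, which is what makes DNMC so costly for expensive simulators and what the MF-EIG estimator targets.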
Stochastic optimization (also referred to as optimization under uncertainty) can be prohibitively expensive for high-fidelity models, generally requiring sampling-based estimation of model statistics within an outer optimization loop. To expedite estimation of these high-fidelity statistics, multi-fidelity estimators (e.g., ACV) can provide orders-of-magnitude computational savings. However, such estimators generally require knowledge of both model costs and model covariances to determine their hyperparameters. The covariances in particular are usually not known a priori and are typically estimated via independent pilot sampling, which can be expensive and wasteful and would need to be repeated at every iteration of a design-optimization loop. To address this issue, we apply a novel probabilistic modeling technique to the covariance information, together with an active learning strategy that directs pilot sampling toward the designs most important to the outer optimization loop, yielding improved covariance estimates and thus improved ACV hyperparameters. This strategy allows us to quantify uncertainty in the covariance estimates and to rapidly draw samples of model correlations at design locations without pilot samples, accelerating ACV estimation in the process.
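As a simplified illustration of why the covariance matters, the sketch below builds a two-model approximate control variate whose weight is set from a pilot-estimated covariance. The analytic f_hi/f_lo pair and the classical control-variate weight alpha = cov/var are illustrative assumptions (the optimal ACV weight also accounts for the sample allocation), and the probabilistic covariance modeling and active learning from our work are not shown.

```python
import numpy as np

rng = np.random.default_rng(1)

def f_hi(x):
    """Hypothetical expensive high-fidelity model (placeholder analytic function)."""
    return np.sin(x) + 0.1 * x**2

def f_lo(x):
    """Hypothetical cheap low-fidelity model, correlated with f_hi."""
    return np.sin(x)

# --- Pilot phase: estimate the high/low covariance from a few paired evaluations ---
x_pilot = rng.normal(size=30)
cov = np.cov(f_hi(x_pilot), f_lo(x_pilot))   # 2x2 sample covariance matrix
alpha = cov[0, 1] / cov[1, 1]                # classical control-variate weight

# --- Estimation phase: few high-fidelity samples, many extra low-fidelity samples ---
n_hi, n_lo = 50, 5000
x_shared = rng.normal(size=n_hi)             # inputs where both models are evaluated
x_extra = rng.normal(size=n_lo)              # inputs where only f_lo is evaluated

mu_hi = f_hi(x_shared).mean()
mu_lo_shared = f_lo(x_shared).mean()
mu_lo_all = np.concatenate([f_lo(x_shared), f_lo(x_extra)]).mean()

# Approximate control variate estimator of E[f_hi(X)]
acv_estimate = mu_hi - alpha * (mu_lo_shared - mu_lo_all)

print(f"plain MC (n={n_hi}): {mu_hi:.4f}")
print(f"ACV estimate:        {acv_estimate:.4f}")
```

In a design-optimization loop, the pilot phase above would naively have to be repeated at every candidate design, which is exactly the cost our active learning strategy is meant to avoid.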
Throughout our work, we have seen that multi-fidelity estimation techniques such as ACV can provide vital computational savings. However, to use these techniques in an error-optimal way, the covariance matrix across model outputs must be estimated from independent pilot model evaluations, incurring a significant but often ignored computational cost. The problem of optimally allocating computational resources between the model evaluations needed for covariance estimation and those needed for ACV estimation remains unsolved: existing methods do not accommodate ACV, rely on biased estimators, and are only valid in the asymptotic regime of many pilot samples. In this project, we introduce a novel method for prescribing the budget allocation between covariance estimation and ACV evaluation that quantifies the uncertainty in the covariance matrix via Bayesian inference. Specifically, we derive an expected loss metric that is adaptively minimized as pilot samples are drawn, informing the user when to terminate pilot sampling. The metric is inexpensive to evaluate relative to most high-fidelity model evaluations and serves as a highly interpretable tool for practitioners of multi-fidelity methods.
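The sketch below shows the kind of Bayesian covariance inference such a stopping rule can be built on, assuming Gaussian pilot data with a conjugate inverse-Wishart prior and conditioning on the sample mean for simplicity. The synthetic pilot data, prior hyperparameters, and the (1 - rho^2) variance-reduction summary are illustrative assumptions; the expected loss metric itself is derived in the paper and not reproduced here.

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(2)

# --- Pilot samples: paired (high-fidelity, low-fidelity) model outputs ---
# Stand-in synthetic data; in practice these come from independent pilot model evaluations.
true_cov = np.array([[1.0, 0.9], [0.9, 1.0]])
pilot = rng.multivariate_normal(mean=[0.0, 0.0], cov=true_cov, size=15)

n, d = pilot.shape
centered = pilot - pilot.mean(axis=0)
scatter = centered.T @ centered

# Weak inverse-Wishart prior; conditioning on the sample mean for simplicity
# (a fuller treatment would use a normal-inverse-Wishart model).
nu0, psi0 = d + 2, np.eye(d)
posterior = invwishart(df=nu0 + n, scale=psi0 + scatter)

# --- Propagate covariance uncertainty to a simple variance-reduction summary ---
# For a two-model control variate the idealized variance reduction is (1 - rho^2);
# the posterior spread of this quantity indicates whether further pilot samples are
# worth their cost, which is what an expected-loss stopping criterion formalizes.
samples = posterior.rvs(size=2000, random_state=rng)
rho = samples[:, 0, 1] / np.sqrt(samples[:, 0, 0] * samples[:, 1, 1])
reduction = 1.0 - rho**2

print(f"posterior mean variance-reduction factor: {reduction.mean():.3f}")
print(f"90% credible interval: [{np.quantile(reduction, 0.05):.3f}, "
      f"{np.quantile(reduction, 0.95):.3f}]")
```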
M. R. Mazan et al., “Flow interruption compared to forced oscillatory maneuvers and esophageal balloon/pneumotachography for measurement of respiratory resistance in the horse,” Journal of Applied Physiology, vol. 137, no. 3, pp. 591–602, Sep. 2024, doi: 10.1152/japplphysiol.00213.2024.
C. L. Lanaghan et al., “Understanding Process–Structure Relationships during Lamination of Halide Perovskite Interfaces,” ACS Appl. Mater. Interfaces, vol. 16, no. 43, pp. 58657–58667, Oct. 2024, doi: 10.1021/acsami.4c12379.
A. M. Ortiz-Ortiz et al., “Gas-Phase Photocatalytic CO2 Methanation over Ru/TiO2: Effects of Pressure, Temperature, and Illumination,” J. Phys. Chem. C, vol. 128, no. 43, pp. 18284–18292, Oct. 2024, doi: 10.1021/acs.jpcc.4c05724.
T. Coons and X. Huan, “A Multi-fidelity Estimator of the Expected Information Gain for Bayesian Optimal Experimental Design,” arXiv preprint arXiv:2501.10845, 2025. https://arxiv.org/abs/2501.10845
T. Coons, A. Jivani, and X. Huan, "Utilizing Covariance Uncertainty for Efficient Pilot Sampling in Multi-fidelity Estimation with Approximate Control Variates."
T. Coons, A. Jivani, and X. Huan, "Adaptive Correlation Estimation via Active Learning for Multi-fidelity Stochastic Optimization."