Reliable and Efficient Uncertainty Quantification using AI
I explored how to quantify uncertainty effectively in multi-output regression tasks, which is essential for informed decision-making in engineering. Gaussian Process Regression (GPR) has long been the standard tool for this purpose, but its cost grows rapidly with data: exact GPR scales cubically in the number of training points, and handling multiple outputs compounds the expense. To address this, I investigated deep ensembles (DE), which train multiple neural networks in parallel, as a scalable alternative. Unlike GPR, a deep ensemble handles large datasets and multiple outputs efficiently, offering a balance between prediction accuracy and uncertainty estimation.
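The ensemble workflow above can be sketched in a few lines. This is a minimal illustration, not the actual implementation: cheap random-feature regressors stand in for the neural networks, and the ensemble's predictive uncertainty is taken as the spread of the member predictions.

```python
import numpy as np

def fit_member(X, y, rng, n_feat=50, reg=1e-3):
    # One ensemble member: a fixed random ReLU feature layer plus a
    # closed-form ridge fit on top (a stand-in for training one network).
    W = rng.normal(size=(X.shape[1], n_feat))
    b = rng.normal(size=n_feat)
    H = np.maximum(X @ W + b, 0.0)
    A = H.T @ H + reg * np.eye(n_feat)
    w = np.linalg.solve(A, H.T @ y)
    return W, b, w

def predict_member(params, X):
    W, b, w = params
    return np.maximum(X @ W + b, 0.0) @ w

def ensemble_predict(members, X):
    # Predictive mean is the member average; the standard deviation of
    # the member predictions serves as the uncertainty estimate.
    preds = np.stack([predict_member(m, X) for m in members])
    return preds.mean(axis=0), preds.std(axis=0)

# Toy 1-D regression problem.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# Members differ only in their random initialization (here, random features).
members = [fit_member(X, y, np.random.default_rng(s)) for s in range(5)]
Xq = np.array([[0.0], [5.0]])  # in-distribution vs. extrapolation query
mu, sigma = ensemble_predict(members, Xq)
```

Because each member is trained independently, the ensemble parallelizes trivially, which is the source of its scalability advantage over exact GPR.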
One challenge I tackled was ensuring the reliability of the uncertainty estimates produced by deep ensembles. Interestingly, I found that increasing the number of neural networks in the ensemble can lead to underestimated uncertainty, making the model overconfident in its predictions. To overcome this, I proposed a post-hoc calibration method that adjusts the uncertainty estimates after the ensemble has been trained. This straightforward approach significantly improved the quality of the uncertainty quantification, yielding more reliable confidence intervals for the model's predictions. Such calibration is crucial in engineering applications, where understanding a model's limitations can prevent costly errors.
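To make the post-hoc idea concrete, here is the simplest common variant, variance scaling for regression: a single scalar s is fitted on held-out data so that the rescaled standard deviations s·σ maximize the Gaussian likelihood. This is a hedged sketch of the general technique, not necessarily the exact method used in the work; the closed form follows from setting the derivative of the Gaussian negative log-likelihood to zero.

```python
import numpy as np

def calibrate_scale(mu, sigma, y_val):
    # Fit a single scalar s so that N(mu, (s*sigma)^2) maximizes the
    # likelihood of held-out targets y_val. Minimizing the Gaussian NLL
    # in s gives the closed form s^2 = mean(((y - mu) / sigma)^2).
    z2 = ((y_val - mu) / sigma) ** 2
    return float(np.sqrt(z2.mean()))

# Synthetic check: if the ensemble's sigma underestimates the true
# spread by a factor of 2, the fitted scale should recover s close to 2.
rng = np.random.default_rng(1)
mu = rng.normal(size=10_000)
sigma = np.full(10_000, 0.5)
y_val = mu + 2.0 * sigma * rng.normal(size=10_000)
s = calibrate_scale(mu, sigma, y_val)
```

Because calibration touches only the predicted standard deviations, it leaves the ensemble's mean predictions, and therefore its accuracy, unchanged.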
The potential impact of this framework extends beyond reliable uncertainty estimates. In tasks such as Bayesian optimization, which rely heavily on uncertainty information to explore design spaces, the calibrated deep ensemble guides the search more effectively. By combining efficient training with accurate uncertainty estimation, the method applies readily to a wide range of regression tasks, advancing engineering decision-making.
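As one concrete example of how calibrated uncertainty feeds into Bayesian optimization, the standard expected-improvement acquisition function consumes exactly the predictive mean and standard deviation that the calibrated ensemble supplies. The sketch below is the textbook formula for minimization, assuming a Gaussian predictive distribution and σ > 0; it is illustrative, not the author's specific setup.

```python
import math

def expected_improvement(mu, sigma, best):
    # Expected improvement for minimization under a Gaussian predictive
    # distribution N(mu, sigma^2), relative to the incumbent value `best`.
    # EI = (best - mu) * Phi(z) + sigma * phi(z), with z = (best - mu) / sigma.
    z = (best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (best - mu) * cdf + sigma * pdf

# At the incumbent (mu == best), EI reduces to sigma * phi(0):
# larger predictive uncertainty means more incentive to explore.
ei_near = expected_improvement(0.0, 0.1, 0.0)
ei_wide = expected_improvement(0.0, 1.0, 0.0)
```

This is why calibration matters for optimization: an overconfident (too small) σ shrinks the exploration term of EI, so the search stops probing uncertain regions of the design space.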
Moreover, to my knowledge this research is the first quantitative investigation of the accuracy of uncertainty estimates from AI models in mechanical and aerospace engineering. While uncertainty quantification has long been a topic of interest, my work rigorously evaluated how well deep ensemble models estimate their own uncertainty. By applying post-hoc calibration, I demonstrated that these models can produce reliable confidence intervals, a critical step toward using AI-based uncertainty quantification in practical engineering applications.