Xiaodong Feng (Faculty of Science and Technology, Beijing Normal University-Hong Kong Baptist University United International College)
Title: Information bottleneck based uncertainty quantification
Abstract: In this talk, we present a new framework for uncertainty quantification via the information bottleneck (IB-UQ) in scientific machine learning tasks, including deep neural regression and neural operator learning. IB-UQ provides uncertainty estimates for label predictions by explicitly modeling the representation variables. Moreover, IB-UQ can be trained on noisy data and still provides accurate predictions with reliable uncertainty estimates on unseen data. We also present a physics-informed version of IB-UQ for PDE-related problems. The capability of the proposed framework is demonstrated with numerical examples.
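To illustrate the information-bottleneck idea behind methods of this kind, here is a minimal sketch: a stochastic Gaussian encoder compresses the input into a representation variable z, a decoder predicts the label from z, and the spread of decoded samples yields an uncertainty estimate. All function forms below (`encoder`, `decoder`, the KL compression term) are hypothetical toy choices for illustration, not the authors' IB-UQ model.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x):
    # Hypothetical stochastic encoder: maps input x to the mean and
    # standard deviation of a Gaussian representation variable z.
    mu = np.tanh(x)
    sigma = 0.1 + 0.9 / (1.0 + np.exp(-x))  # keep sigma > 0
    return mu, sigma

def decoder(z):
    # Hypothetical decoder: predicts the label from the representation.
    return 2.0 * z

def kl_to_standard_normal(mu, sigma):
    # KL(N(mu, sigma^2) || N(0, 1)): the "bottleneck" compression term
    # that an IB-style training objective would weight by beta.
    return 0.5 * (mu**2 + sigma**2 - 1.0 - 2.0 * np.log(sigma))

def predict_with_uncertainty(x, beta=1e-3, n_samples=1000):
    # Monte Carlo over z ~ N(mu, sigma^2): the spread of the decoded
    # samples serves as the uncertainty estimate for the prediction.
    mu, sigma = encoder(x)
    z = mu + sigma * rng.standard_normal(n_samples)
    y = decoder(z)
    return y.mean(), y.std(), beta * kl_to_standard_normal(mu, sigma)
```

In a trained model the encoder and decoder are neural networks and the weighted KL term is added to the prediction loss, trading off label accuracy against compression of the representation.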
Jiaxi Gu (POSTECH)
Title: Third-order finite difference WENO schemes with neural network
Abstract: We present finite difference WENO schemes based on neural networks for hyperbolic conservation laws. Supervised learning is employed, with training data consisting of three-point stencils; we choose WENO3-JS weights and exact derivatives as labels. We design several loss functions: one forces the model to maintain the essentially non-oscillatory behavior while reducing dissipation around discontinuities, while another suppresses oscillations while improving accuracy in smooth regions. A Delta layer is employed to pre-process the input. Numerical examples illustrate the good performance of the proposed WENO schemes.
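For reference, the classical WENO3-JS weights that serve as labels can be computed directly from a three-point stencil. The sketch below uses the standard textbook form of the third-order reconstruction at the right cell interface (not the authors' exact implementation).

```python
import numpy as np

def weno3_js(f, eps=1e-6):
    """Classical WENO3-JS reconstruction at the interface x_{i+1/2}
    from the three-point stencil f = (f_{i-1}, f_i, f_{i+1}).
    Returns the reconstructed value and the nonlinear weights."""
    fm, f0, fp = f
    # Candidate second-order reconstructions on the two substencils.
    q0 = -0.5 * fm + 1.5 * f0
    q1 = 0.5 * f0 + 0.5 * fp
    # Smoothness indicators and linear (optimal) weights.
    b0 = (f0 - fm) ** 2
    b1 = (fp - f0) ** 2
    d0, d1 = 1.0 / 3.0, 2.0 / 3.0
    # Nonlinear weights: large smoothness indicator -> small weight.
    a0 = d0 / (eps + b0) ** 2
    a1 = d1 / (eps + b1) ** 2
    w0, w1 = a0 / (a0 + a1), a1 / (a0 + a1)
    return w0 * q0 + w1 * q1, (w0, w1)
```

On smooth data the nonlinear weights approach the linear weights (1/3, 2/3), recovering third-order accuracy; near a discontinuity the weight of the non-smooth substencil collapses toward zero, which is the essentially non-oscillatory behavior the learned weights must reproduce.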
Taeyeop Lee (POSTECH)
Title: Compressible Stokes flows in the domain with cut boundary
Abstract: In this talk we study compressible Stokes flow in a domain with a cut boundary. The cut is a non-Lipschitz boundary whose tip has a 2π opening angle. The divergence of the leading corner-singularity vector function has different trace values on either side of the cut. Consequently, the pressure solution derived from the continuity equation must have a jump across the streamline emanating from the cut tip, and the pressure gradient in the momentum equation is not well-defined.
To handle this, we construct an auxiliary vector function, called the lifting function, which lifts the pressure jump value on the curve into the region. We split the velocity solution into the lifting function plus a singular part plus a smoother remainder, and we split the pressure solution similarly. With these results, we establish piecewise regularity of the solutions of the compressible Stokes system.
Haojiong Shangguan (Faculty of Science and Technology, Beijing Normal University-Hong Kong Baptist University United International College)
Title: Deep neural networks for evolution equations
Abstract: To address the challenge of long-time integration in solving evolution equations, we propose two novel deep learning-based approaches: Hybrid FEM-PINNs and Integration-Regularization PINNs (IR-PINNs). The hybrid FEM-PINNs method represents the solution as an expansion of finite element basis functions in time, with the coefficients modeled by a neural network taking spatial variables as inputs. This approach effectively decouples the temporal and spatial dependencies and eliminates statistical errors in the time-direction integration. The IR-PINNs method directly represents the solution using a neural network, with a regularization term in integral form added to the loss function to improve generalization and control the solution's behavior. Both methods aim to enhance accuracy and computational efficiency while addressing the challenges of long-time integration in evolution equations. Additionally, a deep adaptive sampling method is incorporated into both approaches to dynamically refine the training dataset during the training process, focusing computational resources on critical regions of the solution space.
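The integral-form regularization idea can be sketched on a toy model problem. The loss below is a hypothetical IR-PINN-style objective for u'(t) = -u(t), u(0) = 1: a standard pointwise residual plus a regularization term built from the integrated equation, u(t) - 1 + ∫₀ᵗ u(s) ds, which vanishes identically on the exact solution and penalizes globally accumulated error over long horizons. This is an illustrative construction under stated assumptions, not the authors' exact formulation.

```python
import numpy as np

def trapezoid(y, x):
    # Composite trapezoid rule (spelled out to avoid
    # NumPy-version-specific function names).
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def ir_pinn_loss(u, du, t, lam=1.0):
    """Hypothetical IR-PINN-style loss for u'(t) = -u(t), u(0) = 1.
    `u` is the candidate solution and `du` its time derivative
    (callables here; in practice the network and its autodiff
    derivative).  First term: pointwise PDE residual.  Second term:
    integral-form regularization, zero on the exact solution."""
    pointwise = np.mean((du(t) + u(t)) ** 2)
    terms = []
    for tk in t:
        s = np.linspace(0.0, tk, 101)
        terms.append(u(tk) - 1.0 + trapezoid(u(s), s))
    return pointwise + lam * np.mean(np.array(terms) ** 2)
```

On the exact solution exp(-t) both terms are (numerically) zero, while a candidate that satisfies the residual only locally still pays a penalty through the integral term — the mechanism by which such a regularizer controls the solution's behavior over long time intervals.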
Jun Sur Park (KIAS)
Title: tLaSDI: Thermodynamics-informed latent space dynamics identification
Abstract: We propose a latent space dynamics identification method, namely tLaSDI, that embeds the first and second laws of thermodynamics. The latent variables are learned through an autoencoder as a nonlinear dimension-reduction model. The latent dynamics are constructed by a neural network-based model that exactly preserves certain structures required by the thermodynamic laws through the GENERIC formalism. An abstract error estimate is established, which provides a new loss formulation involving the Jacobian of the autoencoder. The autoencoder and the latent dynamics are trained simultaneously to minimize the new loss. Computational examples demonstrate the effectiveness of tLaSDI, which exhibits robust generalization ability, even in extrapolation. In addition, an intriguing correlation is empirically observed between a quantity from tLaSDI in the latent space and the behavior of the full-state solution.
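The structure preserved by the GENERIC formalism can be checked on a toy example. The latent dynamics take the form dz/dt = L ∇E + M ∇S with a skew-symmetric Poisson matrix L, a symmetric positive semidefinite friction matrix M, and the degeneracy conditions L ∇S = 0 and M ∇E = 0; together these force dE/dt = 0 and dS/dt ≥ 0. The state, potentials, and matrices below are illustrative choices verifying the algebra at one point, not the learned model.

```python
import numpy as np

def generic_rhs(z, L, M, grad_E, grad_S):
    # GENERIC latent dynamics: dz/dt = L dE/dz + M dS/dz.
    return L @ grad_E(z) + M @ grad_S(z)

# Toy latent state and thermodynamic potentials (illustrative choices).
z0 = np.array([1.0, 0.0, 2.0])
grad_E = lambda z: z                           # E(z) = ||z||^2 / 2
grad_S = lambda z: np.array([0.0, 0.0, 1.0])   # S(z) = z_3

# Poisson matrix: skew-symmetric, with L @ grad_S = 0
# (first degeneracy condition).
L = np.array([[0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

# Friction matrix: symmetric PSD projector with M @ grad_E(z0) = 0
# (second degeneracy condition, enforced here at the state z0).
n = z0 / np.linalg.norm(z0)
M = np.eye(3) - np.outer(n, n)

dz = generic_rhs(z0, L, M, grad_E, grad_S)
dE_dt = grad_E(z0) @ dz    # energy rate: exactly zero by construction
dS_dt = grad_S(z0) @ dz    # entropy rate: nonnegative by construction
```

The point of a GENERIC-structured network is that these sign and degeneracy properties hold by construction for the learned L and M, so the first and second laws are satisfied exactly rather than approximately penalized.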