Title: Speeding up Physics-Informed Machine Learning: from linear algebra to approximation theory
Abstract: The Physics-Informed Neural Network (PINN) has proven itself a powerful tool for obtaining numerical solutions of nonlinear partial differential equations (PDEs), leveraging the expressivity of deep neural networks and the computing power of modern heterogeneous hardware. However, its training remains time-consuming, especially in multi-query and real-time simulation settings, and its parameterization is often excessive.
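To fix ideas, here is a minimal PINN sketch (an illustrative assumption, not the speaker's code): a small network u(x) is trained so that the residual of the model problem u''(x) + pi^2 sin(pi x) = 0 with u(0) = u(1) = 0 vanishes at randomly sampled collocation points, with derivatives obtained by automatic differentiation.

```python
import torch

# Hypothetical toy PINN for u'' + pi^2 sin(pi x) = 0 on [0, 1], u(0) = u(1) = 0.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x = torch.rand(256, 1, requires_grad=True)   # interior collocation points
xb = torch.tensor([[0.0], [1.0]])            # boundary points

for step in range(5000):
    u = net(x)
    # First and second derivatives of u with respect to x via autograd.
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    residual = d2u + torch.pi**2 * torch.sin(torch.pi * x)   # PDE residual
    loss = residual.pow(2).mean() + net(xb).pow(2).mean()    # residual + BC loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Note that every derivative in the loss is differentiated again during training, which is one source of the training cost the talk addresses.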
After introducing the Reduced Basis Method (RBM) from the perspectives of linear algebra and approximation theory, this talk presents the recently proposed Generative Pre-Trained PINN (GPT-PINN) and its nonlinear and sparsified variants. Inspired by RBM, GPT-PINN mitigates both challenges that PINNs face in the setting of parametric PDEs, in the linear reduction regime. It represents a brand-new meta-learning paradigm for parametric systems. As a network of networks, its outer/meta-network contains only one hidden layer with a significantly reduced number of neurons. Moreover, the activation function at each hidden neuron is a (full) PINN pre-trained at a judiciously selected system configuration. The meta-network generates surrogate solutions for the parametric system across the entire parameter domain accurately and efficiently. GPT-PINN's nonlinear and sparsified versions extend the methodology to the nonlinear reduction regime and further speed up training and inference via orders-of-magnitude sparsification of the collocation set.
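A hedged sketch of this meta-network idea, under stated assumptions: `pinns` is a hypothetical list of PINNs pre-trained at selected parameter configurations, and `residual` a hypothetical function returning the PDE residual of a candidate solution at the collocation points for a new parameter `mu`. The frozen PINNs play the role of the hidden-layer activations; only the handful of output-layer weights `c` are trained, which is why the online solve is so cheap.

```python
import torch

def gpt_pinn_surrogate(pinns, residual, mu, x, steps=500):
    # Hypothetical online stage: fit the output-layer weights c of a one-
    # hidden-layer meta-network whose "neurons" are frozen pre-trained PINNs.
    for p in pinns:
        p.requires_grad_(False)                       # hidden layer stays fixed
    x = x.requires_grad_(True)                        # residual needs d/dx via autograd
    c = torch.zeros(len(pinns), requires_grad=True)   # output-layer weights
    opt = torch.optim.Adam([c], lr=1e-2)
    for _ in range(steps):
        # Surrogate = linear combination of the frozen PINNs' outputs.
        u = sum(ci * p(x) for ci, p in zip(c, pinns))
        loss = residual(u, x, mu).pow(2).mean()       # physics-informed loss in c only
        opt.zero_grad()
        loss.backward()
        opt.step()
    return c.detach()
```

In this reading, the offline stage (not shown) greedily selects the configurations at which the full PINNs are pre-trained, and the nonlinear and sparsified variants replace the linear combination and thin out `x`, respectively.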