2:30pm - 3:00pm
Dr. Salah Aldin Faroughi
Ingram School of Engineering, Texas State University
Title: Deep Learning in Scientific Computing: Physics-Guided, Informed, and Encoded Neural Networks
Abstract: The emergence of multi-teraflop machines with thousands of processors for scientific computing, combined with advanced sensor-based experimentation, has heralded explosive growth of disparate, nonuniform, unstructured (i.e., sparse) data across science and engineering fields. The grand challenge in modeling such datasets is the lack of capability to comprehensively bridge phenomena occurring at temporal scales from tens of nanoseconds to seconds, or spatial scales from nanometers to meters. To address complex data modeling, machine learning (ML) and deep learning (DL) algorithms have been extensively employed, albeit at the cost of accuracy and generality. The DL approach, in particular, is claimed to mimic the human brain's ability to learn from complex datasets, and it is thus facilitating advances across a broad spectrum of scientific research, particularly in fluid mechanics, solid mechanics, and materials design. Deep learning for scientific computing is an emerging field of applied mathematics in which DL methods replace a bottleneck step, fully or partially, in the scientific computing process to accelerate direct and/or inverse numerical simulations. To accomplish these aims, three different approaches have been proposed: (i) physics-guided neural networks (PgNNs), (ii) physics-informed neural networks (PiNNs), and (iii) physics-encoded neural networks (PeNNs). In PgNNs, the DL model is constructed as a black box that learns a surrogate mapping from formatted inputs to outputs (generated by advanced experiments or high-fidelity numerical simulations). Due to the intrinsic architecture of conventional DL methods, however, such models are limited to the scope of the sparse datasets on which they are trained, and their inferences cannot be relied upon under unseen conditions.
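The PgNN limitation described above (a purely data-driven surrogate fails under unseen conditions) can be illustrated with a minimal, hypothetical sketch. Here an ordinary least-squares line stands in for the neural network, fit only to "high-fidelity" samples of a target process f(x) = sin(x) on [0, 1]; all names and the choice of target are illustrative, not taken from the talk:

```python
import math

# Hypothetical PgNN-style setup: a black-box surrogate (a plain
# least-squares line standing in for a neural network) is fit only to
# samples of a "high-fidelity" target, f(x) = sin(x), on [0, 1].
xs = [i / 10 for i in range(11)]      # training inputs
ys = [math.sin(x) for x in xs]        # outputs from the "simulation"

# Fit y ≈ a + b*x by ordinary least squares.
n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n
b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
     / sum((x - xbar) ** 2 for x in xs))
a = ybar - b * xbar
surrogate = lambda x: a + b * x

# Inside the training range the surrogate is accurate; outside it
# (unseen conditions) the purely data-driven model breaks down.
err_in = abs(surrogate(0.55) - math.sin(0.55))   # small
err_out = abs(surrogate(3.0) - math.sin(3.0))    # large
```

The same failure mode applies to far more expressive black-box models: without physical constraints, accuracy is confined to the region covered by the training data.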
PiNNs are designed to handle supervised learning tasks while respecting given laws of physics described by general nonlinear differential equations. By tailoring the loss function, a PiNN penalizes the network for failing to satisfy those physical laws. PeNNs are another family of DL approaches for scientific computing that leverage physics-driven or knowledge-based constraints in their architecture to resolve the data sparsity and lack of generalization encountered by both PgNN and PiNN models. In PeNNs, the known physics of the nonlinear dynamics of multiscale systems is directly encoded into cutting-edge neural network architectures to facilitate learning in a data-driven manner. This talk presents state-of-the-art applications of these algorithms in fluid mechanics, examines their limitations, and discusses future routes for scientific deep learning.
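The PiNN idea of tailoring the loss function can be sketched in a few lines. The following is a minimal, hypothetical illustration for the ODE u'(x) + u(x) = 0: the loss combines a data term on sparse observations with a physics-residual term at collocation points. The function names, the weight `lam`, and the use of a finite difference in place of automatic differentiation are all simplifying assumptions, not the speaker's implementation:

```python
import math

def pinn_loss(u, xs, data_points, lam=1.0, h=1e-4):
    """Composite PiNN-style loss for the ODE u'(x) + u(x) = 0.

    u           -- candidate solution, a callable x -> u(x)
    xs          -- collocation points where the physics residual is enforced
    data_points -- list of (x, value) supervised observations
    lam         -- weight balancing the physics vs. data terms (hypothetical)
    """
    # Data term: mean squared error against the (sparse) observations.
    data_loss = sum((u(x) - y) ** 2 for x, y in data_points) / len(data_points)
    # Physics term: penalize the residual u'(x) + u(x) at collocation points,
    # with u' approximated by a central finite difference (a real PiNN would
    # use automatic differentiation through the network instead).
    phys_loss = sum(((u(x + h) - u(x - h)) / (2 * h) + u(x)) ** 2
                    for x in xs) / len(xs)
    return data_loss + lam * phys_loss

# The exact solution u(x) = e^{-x} satisfies the ODE, so its loss is ~0;
# a wrong candidate is penalized by the physics term even at points
# where no supervised data exist.
xs = [i / 10 for i in range(11)]
good = pinn_loss(lambda x: math.exp(-x), xs, [(0.0, 1.0)])
bad = pinn_loss(lambda x: 1.0 - x, xs, [(0.0, 1.0)])
```

Training then minimizes this composite loss over the network parameters, so the physics term steers the model even in data-sparse regions.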