Closing Session (ML2) Details

Wednesday, August 11, PM session, 1:30-4:30

(All times for the workshop are listed in Central Time, UTC -5)


YOUTUBE LINKS: Please go to the KGML YouTube Channel for all available recorded presentations.

Session Organizers: Arindam Banerjee, Imme Ebert-Uphoff, Xiaowei Jia, Vipin Kumar, Michael Steinbach

SPEAKERS:

2:55-3:10 BREAK

"Approximating functions, functionals, and operators using deep neural networks for diverse applications"

Abstract: We will present a new approach to develop a data-driven, learning-based framework for predicting outcomes of physical systems and for discovering hidden physics from noisy data. We will introduce a deep learning approach based on neural networks (NNs) and generative adversarial networks (GANs). Unlike other approaches that rely on big data, here we “learn” from small data by exploiting the information provided by the physical conservation laws, which are used to obtain informative priors or regularize the neural networks. We will demonstrate the power of the resulting physics-informed neural networks (PINNs) for several inverse problems, and we will demonstrate how we can use multi-fidelity modeling in monitoring ocean acidification levels in the Massachusetts Bay. We will also introduce new NNs that learn functionals and nonlinear operators from functions and corresponding responses for system identification. The universal approximation theorem of operators is suggestive of the potential of NNs in learning from scattered data any continuous operator or complex system. We first generalize the theorem to deep neural networks, and subsequently we apply it to design a new composite NN with small generalization error, the deep operator network (DeepONet), consisting of a NN for encoding the discrete input function space (branch net) and another NN for encoding the domain of the output functions (trunk net). We demonstrate that DeepONet can learn various explicit operators, e.g., integrals, Laplace transforms and fractional Laplacians, as well as implicit operators that represent deterministic and stochastic differential equations. More generally, DeepONet can learn multiscale operators spanning many scales and trained by diverse sources of data simultaneously.
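The branch/trunk construction described in the abstract can be illustrated with a minimal, untrained forward pass. This is a sketch only, with hypothetical layer sizes and helper names: the branch net consumes the input function sampled at m fixed sensor points, the trunk net consumes a query location y, and the prediction G(u)(y) is the dot product of the two embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_params(sizes, rng):
    """Random weights for a small fully connected network (untrained sketch)."""
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:       # tanh on all but the last layer
            x = np.tanh(x)
    return x

p = 20                                 # shared embedding width
m = 50                                 # number of fixed "sensor" points
branch = mlp_params([m, 64, p], rng)   # encodes the sampled input function
trunk  = mlp_params([1, 64, p], rng)   # encodes the output-domain location y

def deeponet(u_sensors, y):
    """G(u)(y) ~ <branch(u), trunk(y)>, the DeepONet composite structure."""
    b = mlp_forward(branch, u_sensors)          # shape (p,)
    t = mlp_forward(trunk, np.atleast_2d(y))    # shape (n_query, p)
    return t @ b                                # shape (n_query,)

# Evaluate the (untrained) operator network on u(x) = sin(pi x)
xs = np.linspace(0, 1, m)
u = np.sin(np.pi * xs)
ys = np.linspace(0, 1, 5).reshape(-1, 1)
out = deeponet(u, ys)
print(out.shape)   # (5,)
```

In a real DeepONet both subnetworks are trained jointly on pairs of input functions and operator outputs; the sketch only shows how the two encoders combine.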

Bio: George Karniadakis is from Crete. He received his S.M. and Ph.D. from the Massachusetts Institute of Technology (1984/87). He was appointed Lecturer in the Department of Mechanical Engineering at MIT and subsequently joined the Center for Turbulence Research at Stanford/NASA Ames. He joined Princeton University as Assistant Professor in the Department of Mechanical and Aerospace Engineering and as Associate Faculty in the Program of Applied and Computational Mathematics. He was a Visiting Professor at Caltech in 1993 in the Aeronautics Department and joined Brown University as Associate Professor of Applied Mathematics in the Center for Fluid Mechanics in 1994. After becoming a full professor in 1996, he continued to be a Visiting Professor and Senior Lecturer of Ocean/Mechanical Engineering at MIT. He is an AAAS Fellow (2018-), Fellow of the Society for Industrial and Applied Mathematics (SIAM, 2010-), Fellow of the American Physical Society (APS, 2004-), Fellow of the American Society of Mechanical Engineers (ASME, 2003-) and Associate Fellow of the American Institute of Aeronautics and Astronautics (AIAA, 2006-). He received the SIAM/ACM Prize in Computational Science & Engineering (2021), the Alexander von Humboldt award in 2017, the SIAM Ralph E. Kleinman Prize (2015), the J. Tinsley Oden Medal (2013), and the Computational Fluid Dynamics Award (2007) from the US Association for Computational Mechanics. His h-index is 111 and he has been cited over 58,500 times.

"Data-driven model discovery and physics-informed learning"

Abstract: A major challenge in the study of dynamical systems is that of model discovery: turning data into reduced order models that are not just predictive, but provide insight into the nature of the underlying dynamical system that generated the data. We introduce a number of data-driven strategies for discovering nonlinear multiscale dynamical systems and their embeddings from data. We consider two canonical cases: (i) systems for which we have full measurements of the governing variables, and (ii) systems for which we have incomplete measurements. For systems with full state measurements, we show that the recent sparse identification of nonlinear dynamical systems (SINDy) method can discover governing equations with relatively little data and introduce a sampling method that allows SINDy to scale efficiently to problems with multiple time scales, noise and parametric dependencies. For systems with incomplete observations, we show that the Hankel alternative view of Koopman (HAVOK) method, based on time-delay embedding coordinates and the dynamic mode decomposition, can be used to obtain linear models and Koopman-invariant measurement systems that nearly perfectly capture the dynamics of nonlinear quasiperiodic systems. Neural networks are used in targeted ways to aid in the model reduction process. Together, these approaches provide a suite of mathematical strategies for reducing the data required to discover and model nonlinear multiscale systems.
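The core of the SINDy method mentioned above is a sparse regression: fit the observed derivatives against a library of candidate terms, then repeatedly zero out small coefficients and refit. A minimal sketch on synthetic data (for brevity, states are sampled and derivatives are computed from the known right-hand side; in practice x' is estimated numerically from a measured time series):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data for the 1-D system  x' = -0.5*x - x**3
x = rng.uniform(-2, 2, 200)
dx = -0.5 * x - x**3 + 0.01 * rng.standard_normal(200)   # mildly noisy

# Candidate library Theta(x) = [1, x, x^2, x^3]
Theta = np.column_stack([np.ones_like(x), x, x**2, x**3])

def stlsq(Theta, dx, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares: the core SINDy regression."""
    xi = np.linalg.lstsq(Theta, dx, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0                # prune small coefficients
        big = ~small
        if big.any():                  # refit on the surviving terms only
            xi[big] = np.linalg.lstsq(Theta[:, big], dx, rcond=None)[0]
    return xi

xi = stlsq(Theta, dx)
print(xi)   # recovers x' ~ -0.5*x - x**3: only entries 1 and 3 survive
```

The threshold is the key hyperparameter: it trades off sparsity of the discovered model against fitting accuracy.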

Bio: Nathan Kutz is the Yasuko Endo and Robert Bolles Professor of Applied Mathematics at the University of Washington, having served as chair of the department from 2007-2015. He received the BS degree in physics and mathematics from the University of Washington in 1990 and the PhD in applied mathematics from Northwestern University in 1994. He was a postdoc in the applied and computational mathematics program at Princeton University before taking his faculty position. His interests range from neuroscience to fluid dynamics, where he integrates machine learning with dynamical systems and control.

"Physics-Guided Uncertainty Quantification for Scientific Machine Learning and Risk-Informed Decision"

Abstract: This presentation will discuss how physics-guided uncertainty quantification in complex spatiotemporal dynamical systems can enhance the credibility of scientific machine learning discoveries and enable translation to risk-informed decisions and policy. Research and translational barriers as well as methods and solutions will be discussed in the context of two interconnected grand challenge areas: (a) predictive understanding of the water cycle within the earth system, and (b) preparedness to hydrological extremes for ensuring the resilience of coupled natural-human systems.

Bio: Auroop Ganguly is a Professor at Northeastern University in Boston, MA, and a joint Chief Scientist at the Pacific Northwest National Laboratory in Richland, WA. His research interests encompass climate risks and infrastructural resilience with spatiotemporal machine learning and complex network science. Prior to Northeastern, he has worked at the US DOE's Oak Ridge National Laboratory and at Oracle Corporation. He is a co-founder and the chief scientific adviser of risQ, a Boston-based climate analytics company. Ganguly is an ASCE Fellow and obtained a PhD from MIT.

"Differential Graph Neural Networks for Physics-Informed AI Models"

Abstract: Deep learning has achieved significant successes in prediction performance by learning latent representations from data-rich applications, but we are confronted with many challenging learning scenarios in modeling natural phenomena, where only a limited number of labeled examples are available or there is much noise in the data. Furthermore, there can be constant changes in data distributions (e.g., dynamic systems). Therefore, there is a pressing need to develop a new generation of deeper, more robust learning models that can address these challenging learning scenarios. In this talk, I will discuss our recent work on differential graph neural networks for physics-informed AI models via meta-learning and causal inference.

Bio: Yan Liu is a Professor in the Computer Science Department and the Director of the Machine Learning Center at the University of Southern California. She was a Research Staff Member at IBM Research from 2006 to 2010 and Chief Scientist at Didi Chuxing in 2018. She received her Ph.D. degree from Carnegie Mellon University. Her research interest is machine learning and its applications to health care, sustainability and social network analysis. She has received several awards, including ACM Distinguished Member, the NSF CAREER Award, the Okawa Foundation Research Award, New Voices of the Academies of Science, Engineering, and Medicine, the Biocom Catalyst Award, and an ACM Dissertation Award Honorable Mention.

"Topology in Machine Learning"

Abstract: How do you vectorize geometry for use in machine learning problems? In this talk I will introduce persistent homology, a popular technique for incorporating geometry and topology in machine learning. I will survey applications arising from machine learning tasks in materials science, computer vision, and agent-based modeling, and describe how these techniques are related to explainable machine learning.
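The simplest instance of the persistent homology pipeline sketched in the abstract is 0-dimensional persistence: every point is born at scale 0, and a connected component dies at the edge length where the growing Vietoris-Rips complex merges it into an older component. A minimal sketch (hypothetical function name; real applications use libraries that also compute higher-dimensional features such as loops and voids):

```python
import numpy as np
from itertools import combinations

def h0_persistence(points):
    """Finite death times of 0-dimensional persistence for a point cloud,
    via Kruskal-style union-find over edges sorted by length."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    edges = sorted((np.linalg.norm(points[i] - points[j]), i, j)
                   for i, j in combinations(range(n), 2))
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                 # two components merge at scale d
            parent[rj] = ri
            deaths.append(d)         # the younger component dies here
    return sorted(deaths)            # n-1 finite deaths; one class persists

# Two well-separated pairs of points: two small deaths (within-cluster
# merges) and one large death (the clusters joining) reveal the geometry
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 0.0], [5.1, 0.0]])
print(h0_persistence(pts))
```

The sorted death values form one simple vectorization of the geometry; fixed-length feature maps such as persistence images or landscapes are common refinements.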

Bio: Professor Adams' research interests are in computational topology and geometry, quantitative topology, and topology applied to data analysis. His theoretical work has illuminated the structure of Vietoris-Rips simplicial complexes, a popular tool for approximating the shape of a dataset via persistent homology. He has applied topology to machine learning, computer vision, coverage problems in minimal sensing, collective motion models, and energy landscapes arising in chemistry. Professor Adams is the co-director of the Applied Algebraic Topology Research Network, with over 20 hours watched per day on its YouTube channel.

"Learning for long range temporal prediction"

Abstract: Several important scientific and societal problems need long range temporal predictions, e.g., sub-seasonal climate forecasting, Atlantic hurricane projections, financial and economic forecasts, etc. We will discuss how such long range predictions are challenging both for physics based models, which rely on dynamical models whose approximation errors accrue over time without observations, and for machine learning models, which have been primarily designed for short range temporal prediction. We will further discuss our recent advances in sub-seasonal climate forecasting using machine learning models, including a fine grained error analysis, comparisons with climate models, and promising improvements by leveraging such climate models.
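The point about approximation errors accruing over time can be seen in a toy example. This is a hypothetical illustration, not the speakers' method: a model whose one-step error is tiny still drifts when rolled out autoregressively for a long horizon without intermediate observations.

```python
import numpy as np

# Toy long-range prediction: true dynamics x_{t+1} = a_true * x_t,
# learned model x_{t+1} = a_model * x_t with a small one-step mismatch.
a_true, a_model = 0.99, 0.995          # hypothetical true vs. learned rates
x_true = x_model = 1.0
errors = []
for t in range(1, 201):
    x_true *= a_true                   # true trajectory
    x_model *= a_model                 # free-running model rollout
    errors.append(abs(x_model - x_true))

print(f"error at t=1:   {errors[0]:.4f}")
print(f"error at t=200: {errors[-1]:.4f}")
```

The one-step error is 0.005, but by t=200 the accumulated rollout error is tens of times larger, which is why long-range forecasting stresses both dynamical models and short-range ML predictors.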

Bio: Arindam Banerjee is a Founder Professor at the Department of Computer Science, University of Illinois Urbana-Champaign. His research interests are in machine learning and data mining, especially on problems involving geometry and randomness. His current research focuses on computational and statistical aspects of deep learning, spatial and temporal data analysis, and sequential decision making problems. His work also focuses on applications in complex real-world problems in different areas including climate science, ecology, recommendation systems, and finance, among others. He has won several awards, including the NSF CAREER award (2010), the IBM Faculty Award (2013), and six best paper awards in top-tier venues.