Second Symposium on

Science-Guided AI

(SGAI-AAAI-21)

Held as part of AAAI Fall Symposium Series (FSS) 2021

November 4-6, 2021

Taking place VIRTUALLY on Zoom (private link shared with registered attendees)

Register for the symposium using the link below!

Overview

Science-guided AI is an emerging research paradigm that aims to integrate scientific knowledge into AI models and algorithms in a principled way, so that the patterns and relationships learned from data are not only accurate on validation data but also consistent with known scientific theories. Science-guided AI is ripe with research opportunities to influence fundamental advances in AI for accelerating scientific discovery, and it has already begun to gain attention in several scientific communities including fluid dynamics, quantum chemistry, biology, hydrology, and climate science.

The goal of this symposium is to nurture the community of researchers working at the intersection of AI and scientific areas and shape the vision of the rapidly growing field of SGAI.

Our symposium will involve a mix of activities including invited talks, breakout sessions, panel discussions, and paper presentations from researchers working in the area of SGAI in academia, industry, and national labs. We will build upon the success of the First Symposium on Physics-Guided AI (PGAI) held as part of the AAAI Fall Symposium Series (FSS) 2020.

Program

We have an exciting line-up of 6 keynote talks, 8 invited talks, and 18 contributed paper presentations. See the schedule below (all times are in the Eastern Time zone).

Day 1: Nov 4, 2021


9:00 AM - 9:10 AM


Welcome and Introduction


9:10 AM - 9:55 AM

Keynote Talk by Michael Mahoney, UC Berkeley, "Characterizing possible failure modes in physics-informed neural networks"

Abstract: Recent work in scientific machine learning (ML) has developed physics-informed neural networks (PINNs). The typical approach incorporates physical domain knowledge as soft constraints on an empirical loss function and uses existing ML methodologies to train the model. We demonstrate that, while existing PINN methodologies can learn good models for simple problems, they can easily fail to learn relevant physical phenomena even for simple PDEs. In particular, we analyze several distinct situations of widespread physical interest, including learning differential equations with convection, reaction, and diffusion operators. We provide evidence that the soft regularization in PINNs, which involves differential operators, can introduce a number of subtle problems, including making the problem ill-conditioned. These possible failure modes are not due to the lack of expressivity in the NN architecture. Instead, the PINN's setup makes the loss landscape very hard to optimize. We then describe two promising solutions to address these failure modes. Time permitting, we will also discuss failures with existing approaches designed to learn continuous dynamical systems, as well as how several of these issues can be addressed by embedding the model into higher-order numerical integration schemes.
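
To make the soft-constraint formulation above concrete, here is a minimal PINN sketch in PyTorch for the 1D convection equation u_t + beta*u_x = 0, one of the operators the talk analyzes. This is our own illustrative toy, not the speaker's code; the network size, collocation sampling, and the omission of boundary terms are simplifying assumptions.

    # Minimal PINN sketch (illustrative): u_t + beta * u_x = 0 with u(x, 0) = sin(x).
    import math
    import torch

    beta = 5.0
    net = torch.nn.Sequential(
        torch.nn.Linear(2, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 1),
    )

    def pde_residual(x, t):
        # Evaluate u(x, t) and its derivatives via automatic differentiation.
        u = net(torch.stack([x, t], dim=-1)).squeeze(-1)
        u_x, u_t = torch.autograd.grad(u.sum(), (x, t), create_graph=True)
        return u_t + beta * u_x

    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for step in range(5000):
        # Random collocation points in [0, 2*pi] x [0, 1].
        x = (2 * math.pi * torch.rand(256)).requires_grad_()
        t = torch.rand(256).requires_grad_()
        # Initial-condition points at t = 0 (boundary terms omitted for brevity).
        x0 = 2 * math.pi * torch.rand(256)
        u0 = net(torch.stack([x0, torch.zeros_like(x0)], dim=-1)).squeeze(-1)
        # Soft-constraint loss: PDE residual term plus initial-condition misfit.
        loss = (pde_residual(x, t) ** 2).mean() + ((u0 - torch.sin(x0)) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

The talk's point is precisely that optimizing a loss of this shape can fail (for convection, increasingly so as beta grows) even though the network is expressive enough to represent the solution.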

Bio: Michael W. Mahoney is at the University of California at Berkeley in the Department of Statistics and at the International Computer Science Institute (ICSI). He is also an Amazon Scholar as well as a faculty scientist at the Lawrence Berkeley National Laboratory. He works on algorithmic and statistical aspects of modern large-scale data analysis. Much of his recent research has focused on large-scale machine learning, including randomized matrix algorithms and randomized numerical linear algebra, geometric network analysis tools for structure extraction in large informatics graphs, scalable implicit regularization methods, computational methods for neural network analysis, physics-informed machine learning, and applications in genetics, astronomy, medical imaging, social network analysis, and internet data analysis. He received his PhD from Yale University with a dissertation in computational statistical mechanics, and he has worked and taught at Yale University in the mathematics department, at Yahoo Research, and at Stanford University in the mathematics department.


9:55 AM - 10:20 AM

Invited Talk by Youzuo Lin, LANL, "Physics-guided Learning-driven Computational Seismic Imaging: from Synthetic Practice to Field Applications"

Abstract: Computational seismic imaging is crucial for energy exploration, civil infrastructure, groundwater contamination and remediation, and so on. However, the relevant data analysis capability for solving computational seismic imaging problems is inadequate, mainly due to the ill-posed nature of the problems and the high computational costs. Recently, machine learning (ML) based computational methods have been pursued in the context of scientific computational imaging problems. Some success has been attained when an abundance of simulations and labels are available. Nevertheless, ML models trained using physical simulations usually suffer from weak generalizability when applied to a moderately different real-world dataset. Moreover, obtaining corresponding training labels is typically prohibitively expensive due to the high demand for subject-matter expertise. On the other hand, unlike imaging problems in a typical computer vision context, many scientific imaging problems are governed by underlying physical equations. For example, the wave equation is the governing physics for seismic imaging problems. To fully unleash the power and flexibility of ML for solving large-scale computational seismic imaging problems, we have developed new computational methods to bridge the technical gap by addressing the critical issues of generalizability and data scarcity. In this talk, I will go through our recent R&D efforts in leveraging both the power of machine learning and the underlying physics. A series of numerical experiments is conducted using datasets ranging from synthetic simulations to field applications to evaluate the effectiveness of our techniques.

Bio: Youzuo Lin is a Computational Scientist at the Earth and Environmental Sciences Division of Los Alamos National Laboratory. Before joining as a staff scientist at LANL, he completed his Ph.D. in Applied and Computational Mathematics at Arizona State University. His current research focuses on physics-informed machine learning, deep learning, computational methods, and their applications in computational imaging, signal and image analysis. Specifically, he has worked on subsurface imaging for energy exploration, medical imaging and cancer detection, and time series classification for small earthquake detection.


10:20 AM - 10:50 AM


Break


10:50 AM - 11:35 AM

Keynote Talk by Peter Battaglia, DeepMind, "Physical inductive biases for learning simulation and scientific discovery"

Abstract: This talk will explore both how our knowledge of physics can improve our machine learning approaches, and how our machine learning tools can be used to improve our knowledge of physics. I'll describe work my collaborators and I have done using particle- and mesh-based approaches for learning simulation, how we leverage inductive biases about ODEs, Hamiltonian and Lagrangian mechanics in learned simulators, and how we can use neural networks with symbolic regression to discover physical governing equations from simulated and real data.
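
As one concrete instance of the Hamiltonian inductive bias mentioned above, a learned simulator can parameterize a scalar energy H(q, p) and derive the vector field from Hamilton's equations. The sketch below is a generic illustration in PyTorch (not DeepMind's implementation); the pendulum target field is our own toy assumption.

    # Hamiltonian-network sketch: learn a scalar H(q, p); dynamics follow
    # q' = dH/dp, p' = -dH/dq, so the physical bias is built into the model.
    import torch

    H = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                            torch.nn.Linear(64, 1))

    def dynamics(q, p):
        q = q.requires_grad_()
        p = p.requires_grad_()
        energy = H(torch.stack([q, p], dim=-1)).sum()
        dHdq, dHdp = torch.autograd.grad(energy, (q, p), create_graph=True)
        return dHdp, -dHdq  # Hamilton's equations

    # Fit to observed phase-space derivatives of a pendulum (toy ground truth).
    q_obs, p_obs = torch.randn(512), torch.randn(512)
    dq_true, dp_true = p_obs, -torch.sin(q_obs)
    opt = torch.optim.Adam(H.parameters(), lr=1e-3)
    for step in range(2000):
        dq, dp = dynamics(q_obs.clone(), p_obs.clone())
        loss = ((dq - dq_true) ** 2 + (dp - dp_true) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()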

Bio: Peter Battaglia is a research scientist at DeepMind. Previously he was a postdoc and research scientist in MIT's Department of Brain and Cognitive Sciences. His current work focuses on approaches for reasoning about and interacting with complex systems, by combining richly structured knowledge with flexible learning algorithms.


11:35 AM - 12:00 PM

Invited Talk by Manish Marwah, Micro Focus, "Robust AI for Cyber-Physical Systems"

Abstract: We are heading towards a cyber-physical age, where increasingly the engineered physical systems of the past such as transportation infrastructure, power infrastructure, buildings and factories are being integrated with computation and communication systems for the goals of autonomous operation, optimization, maintenance and personalization. These infrastructures are critical to modern life and any disruption or inefficiency in their operation has economic, environmental and societal costs. Given their complexity, AI and machine learning are important tools for achieving the above goals. However, before AI can be deployed widely in real-world critical cyber-physical systems (CPS), it needs to be made more robust. Some of the key challenges include: 1) How can an AI model (e.g., a CPS anomaly detection model) explain its output? 2) How can data be used for model induction while preserving data privacy? 3) How can domain knowledge be incorporated into data-driven models? In this talk, I'll present some ongoing and past work on applications of AI to cyber-physical systems.

Bio: Manish Marwah is a principal research scientist in a research group at Micro Focus (until recently part of Hewlett Packard Labs). His main research interests are in the broad areas of AI and data science, especially in the context of cyber-physical systems, and applications to cyber-security. He has initiated and led research on designing data science methods for sustainability and energy management of smart buildings and data centers. Recently, he has been working on large-scale analytics and its applications to IoT and security domains. His research has led to over 65 refereed papers, several of which have won awards, including at AAAI, KDD, and IGCC. He has twice co-organized Data Mining for Sustainability (SustKDD), a workshop at KDD. He has been granted 43 patents (with several pending). Manish received a Ph.D. in Computer Science from the University of Colorado, Boulder, and a B.Tech. in Mechanical Engineering from the Indian Institute of Technology, Delhi.


12:00 PM - 1:00 PM


Break


1:00 PM - 3:00 PM

Contributed Paper Presentations Session 1

  • 1:00 PM - 1:15 PM: Jie Bu and Anuj Karpatne, "Quadratic Residual Networks: A New Class of Neural Networks for Solving Forward and Inverse Problems in Physics Involving PDEs", (Paper Link)

  • 1:15 PM - 1:30 PM: Mohannad Elhamod, Kelly Diamond, A. Murat Maga, Yasin Bakis, Henry L. Bart Jr., Paula Mabee, Wasila Dahdul, Jeremy Leipzig, Jane Greenberg, Brian Avants and Anuj Karpatne, "Hierarchy-guided Neural Networks for Species Classification", (Paper Link)

  • 1:30 PM - 1:45 PM: Max Zhu, Jacob Moss and Pietro Lio, "Modular Neural Ordinary Differential Equations" (Paper Link)

  • 1:45 PM - 2:00 PM: Gheorghe Tecuci, Dorin Marcu, Steven Mirsky and Alison Robertson, "Prediction-Driven Knowledge Discovery from Data and Prior Knowledge" (Paper Link)

  • 2:00 PM - 2:15 PM: Jay Mayfield, Adam Moses and Alisha Sharma, "Exploring the Echo State Approach to Modeling Stiff Chemical Kinetics" (Paper Link)

  • 2:15 PM - 2:30 PM: Julian Lee, Kamal Viswanath, Alisha Sharma, Jason Geder, Ravi Ramamurti and Marius Pruessner, "Data-Driven Approaches for Thrust Prediction in Underwater Flapping Fin Propulsion Systems" (Paper Link)

  • 2:30 PM - 2:45 PM: Rachel Cooper, Andrey A Popov and Adrian Sandu, "Investigation of Nonlinear Model Order Reduction of the Quasigeostrophic Equations through a Physics-Informed Convolutional Autoencoder" (Paper Link)

2:45 PM - 3:00 PM: Author Discussion Panel



3:00 PM - 3:35 PM


Break


3:35 PM - 4:00 PM

Invited Talk by Anu Myne, MIT Lincoln Laboratory, “Knowledge-informed AI for National Security”

Abstract: Artificial intelligence technology has a rich history that dates back decades and includes two downturns before the explosive resurgence of today, which is credited largely to data-driven learning approaches. While AI technology has become, and continues to become, increasingly mainstream with impact across domains and industries, it is not without drawbacks, weaknesses, and the potential to cause undesired effects. The catalogue of AI approaches is vast, with many techniques and variants, but a simple distinction separates approaches that are primarily, and often solely, data-driven (leveraging little to no knowledge) from those that do leverage knowledge. Within the national security domain, purely data-driven learning approaches can be particularly prone to serious unwanted consequences; furthermore, there is ample scientific and domain-specific knowledge that can be leveraged to advance knowledge-informed AI for problems of national importance. This report shares findings from a thorough exploration of AI approaches that exploit both data and knowledge (a.k.a. knowledge-informed AI). Specifically, we review illuminating examples of knowledge-informed deep learning and knowledge-informed reinforcement learning, and the performance gains they provide (quantified and qualified), to summarize the current state of the art. We also discuss an apparent trade space across variants of knowledge-informed AI, along with observed and prominent issues that suggest worthwhile future research directions. Most importantly, this report suggests how the advantages of knowledge-informed AI, specifically knowledge-informed deep learning and reinforcement learning, stand to benefit the national security domain.

Bio: Anu Myne is currently serving as an associate to the chief technology officer at MIT Lincoln Laboratory. She received her B.S. in electrical engineering from Worcester Polytechnic Institute, joined the Laboratory in 2006, and pursued her M.S. degree at Northeastern University while working. For several years of her career, she focused heavily on radar, radar-spoofing, and how to effectively test these technologies. She then shifted focus to applying AI within the same national security problem areas – she developed AI for intelligent test and evaluation using classical machine learning, as well as image classification algorithms using modern AI techniques. She stepped into her current role in 2018 and began focusing on AI and all its applications across the Laboratory, as well as ethical AI and the vast and growing sub-field of what some refer to as informed AI. From her first experiments with modern deep learning techniques, Anu has thought about how integrating principled knowledge into these learning pipelines could speed up learning, require fewer resources, and lead to breakthroughs faster. Knowledge-informed AI is now an important strategic direction at the Laboratory that she hopes to expand to bring far-reaching impact to the broader set of national security problems and solutions.


4:00 PM - 5:00 PM

Panel Discussion: Role of AI in Advancing Science: Best Practices and Opportunities

Panelists: Peter Battaglia, Kieron Burke, Youzuo Lin, Michael Mahoney, Anu Myne, and Rahul Rai

Panel Questions:

  1. What are some of the biggest advantages of using AI for knowledge discovery in scientific problems?

  2. What are some examples of scientific problems where science-guided AI has found success (or is beginning to show promise) in discovering new patterns, theories, and relationships?

  3. Are there any emerging scientific problems where AI methods have not been fully utilized but hold great potential?

  4. What are some of the biggest challenges in applying current standards of AI in scientific problems, and what are some promising directions of research to address them?


5:00 PM - 5:30 PM


Breakout Session 1

Day 2: Nov 5, 2021


9:00 AM - 9:45 AM

Keynote Talk by Christopher Rackauckas, MIT, "The Continuing Advances of Differentiable Simulation"

Abstract: Differentiable simulation techniques are the core of scientific machine learning methods which are used in the automatic discovery of mechanistic models through infusing neural network training into the simulation process. In this talk we will start by showcasing some of the ways that differentiable simulation is being used, from discovery of extrapolatory epidemic models to nonlinear mixed effects models in pharmacology. From there, we will discuss the computational techniques behind the training process, focusing on the numerical issues involved in handling differentiation of highly stiff and chaotic systems. The viewers will leave with an understanding of how compiler techniques are being infused into the simulation stack to provide the future of differentiable simulators.
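
The speaker's software stack is Julia/SciML, but the core mechanism of differentiable simulation fits in a few lines of Python, shown below as an illustrative toy (our own assumption-laden sketch, not the talk's software): gradients flow through the numerical integrator itself, so physical parameters, or neural networks embedded in the equations, can be fit by gradient descent.

    # Differentiable-simulation sketch: backpropagate through explicit Euler
    # integration of u' = -k * u to recover the decay rate k from one datum.
    import torch

    k = torch.tensor(0.5, requires_grad=True)   # unknown physical parameter
    u0, dt, n_steps = torch.tensor(1.0), 0.01, 100

    def simulate(k):
        u = u0
        for _ in range(n_steps):
            u = u + dt * (-k * u)               # each step stays on the autograd tape
        return u

    target = torch.exp(torch.tensor(-2.0))      # data generated with k = 2.0 over t = 1
    opt = torch.optim.Adam([k], lr=0.05)
    for step in range(300):
        loss = (simulate(k) - target) ** 2
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(k.item())                             # approaches 2.0 (up to Euler error)

Replacing the `-k * u` term with a neural network yields the mechanistic-plus-learned models that the talk describes for automatic discovery of model components.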

Bio: Christopher Rackauckas is an Applied Mathematics Instructor at MIT, the Director of Modeling and Simulation at Julia Computing and Creator / Lead Developer of JuliaSim, Director of Scientific Research at Pumas-AI and Creator / Lead Developer of Pumas, and Lead Developer of the SciML Open Source Software Organization. His research and software are focused on Scientific Machine Learning (SciML). His recent work is focused on bringing personalized medicine to standard medical practice through the proliferation of software for scientific AI. He is the lead developer of the DifferentialEquations.jl solver suite along with over a hundred other Julia packages, earning him the inaugural Julia Community Prize and front-page features on many tech community sites. His work on high-performance differential equation solving is the engine accelerating many applications, from the MIT-CalTech CLiMA climate modeling initiative to the SIAM Dynamical Systems award-winning DynamicalSystems.jl toolbox. He received the United States Department of the Air Force Artificial Intelligence Accelerator Scientific Excellence Award.


9:45 AM - 10:10 AM

Invited Talk by Kieron Burke, UC Irvine, "Using AI to Find Better Density Functionals"

Abstract: Electronic structure calculations are used throughout chemistry and materials science to find new drugs and materials. About one-third of US supercomputing power is now spent on this task (more than on climate change). But all such calculations rely on density functional theory, which requires approximating the electronic energy. Today's human-designed functionals yield useful accuracy, but have many limitations and flaws. A race is on to use machine learning to find better density functionals. I will discuss some of the latest results in this quest.

Reference: Kalita, Bhupalee, Li, Li, McCarty, Ryan J., and Burke, Kieron, "Learning to Approximate Density Functionals," Accounts of Chemical Research 54, 818-826 (2021).


Bio: Kieron Burke is a professor in both the chemistry and physics departments at UC Irvine. His research focuses on developing a theory of quantum mechanics called density functional theory. Prof. Burke works on developing all aspects of DFT: formalism, extensions to new areas, new approximations, and simplifications. His work is heavily used in materials science, chemistry, matter under extreme conditions (such as planetary interiors or fusion reactors), magnetic materials, molecular electronics, and so on. He has given talks in theoretical chemistry, condensed matter physics, applied mathematics, computer science, and even organic chemistry. Prof. Burke is a Distinguished Professor at UC Irvine. He is also a fellow of the American Physical Society, the Royal Society of Chemistry, and the American Association for the Advancement of Science, and a member of the International Academy of Quantum Molecular Sciences. He is known around the world for his many educational and outreach activities. According to Google Scholar, his research papers are now cited almost 20,000 times each year.


10:10 AM - 10:35 AM


Invited Talk by Xiaowei Jia, University of Pittsburgh, "Integrating Physical Simulations into Machine Learning for Modeling Aquatic Systems"

Abstract: Physics-based models are widely used to simulate water temperature and streamflow. Although they are built on general physical laws, these models often produce biased simulations due to inaccurate parameterizations or approximations used to represent the true physics. Moreover, existing approaches are not designed for capturing the impact of human infrastructures, such as dams and reservoirs. Machine learning models often achieve better accuracy for well-observed streams and lakes, but perform poorly when adapted to poorly observed locations. In this presentation, we introduce two related research tasks for advancing data-driven approaches for modeling stream networks by leveraging simulated data produced by physics-based models. First, we build a new data-driven framework to monitor dynamical systems by extracting general scientific knowledge embodied in simulation data generated by the physics-based model. To handle the bias in simulation data caused by imperfect parameterization, we propose to extract general physical relationships jointly from multiple sets of simulations generated by a physics-based model under different physical parameters, and then fine-tune the model using limited observation data via a contrastive learning process. This method not only produces better predictions, but also provides insights about the variation of physical parameters over space and time. Second, we propose a new data-driven method for modeling the impact of reservoirs on stream water temperature. A pseudo-perspective learning method is proposed to mimic water managers' release decisions, which effectively handles reservoirs without available release information. We also extract water flow patterns using physical simulations of reservoirs and transfer such knowledge to guide the learning process for entire stream networks.

Bio: Xiaowei Jia is an Assistant Professor in the Department of Computer Science at the University of Pittsburgh. He obtained his Ph.D. degree at the University of Minnesota under the supervision of Prof. Vipin Kumar. Prior to that, he received his M.S. degree from the State University of New York at Buffalo and his B.S. degree from the University of Science and Technology of China. His research interests include spatio-temporal data mining, physics-guided data science, and deep learning. His research has been published in major data mining journals (e.g., TKDE) and scientific journals, as well as top-tier conferences (e.g., SIGKDD, ICDM, SDM, and CIKM). Jia was the recipient of the UMN Doctoral Dissertation Fellowship (2019), the Best Applied Data Science Paper Award at SDM 21, the Best Conference Paper Award at ASONAM 16, and the Best Student Paper Award at BIBE 14.


10:35 AM - 11:00 AM


Break


11:00 AM - 11:25 AM


Invited Talk by Rahul Rai, Clemson University, "Driven by Data or Derived through Physics? Hybrid Physics Guided Machine Learning Approach"

Abstract: A multitude of physical systems applications, including design, control, diagnosis, prognostics, and a host of other problems, are predicated on the assumption of model availability. There are mainly two approaches to modeling: physics/equation-based modeling (Model-Based, MB) and Machine Learning (ML). Model-based methods assume the availability of an accurate system model, while data-driven methods are based on machine learning. Purely data-driven ML methods ignore any knowledge about the physical/abstract system. Additionally, ML approaches require a large amount of labeled training data that is typically unavailable. MB approaches require excellent physics models and good specification of parameter values. When building models of complex systems, we are often limited by the unavailability of the parameters of the system components due to incomplete technical specifications, hidden physical interactions, or interactions that are too complex to model from first principles. Hence, we often make simplifying assumptions (e.g., linear approximations) and construct coarse models that imperfectly describe the behavior of the real system. A prudent approach is to use hybrid methods that use the physics of the system and prior knowledge about the domain to guide the construction of machine learning techniques such as Deep Neural Networks (DNNs). The principal goal of this talk is to discuss challenges related to the development of hybrid methods that combine multi-physics equation-based models with data-driven machine learning models (such as DNNs) to enable predictive modeling of complex systems in the presence of imperfect models and sparse and noisy data. I will discuss connections to larger problems in the associated area and present specific results related to the development of novel hybrid methods.
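
One common pattern for such hybrid methods is sketched below: the coarse physics model supplies a baseline prediction and a small network learns only the residual. This is a generic illustration, not Dr. Rai's specific formulation; the linear coarse model and synthetic data are toy assumptions.

    # Hybrid physics + ML sketch: physics baseline plus a learned correction.
    import torch

    def coarse_physics(x):
        return 2.0 * x                      # simplified (e.g., linearized) model

    correction = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                                     torch.nn.Linear(32, 1))

    def hybrid(x):
        return coarse_physics(x) + correction(x)

    # Noisy observations of a nonlinear response the coarse model misses.
    x = torch.linspace(-2, 2, 200).unsqueeze(-1)
    y = 2.0 * x + 0.5 * torch.sin(3 * x) + 0.05 * torch.randn_like(x)

    opt = torch.optim.Adam(correction.parameters(), lr=1e-2)
    for step in range(1000):
        loss = ((hybrid(x) - y) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

Because the network only has to capture the model-data discrepancy, this setup typically needs far less data than a purely black-box fit of the full response.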

Bio: Dr. Rahul Rai joined the Department of Automotive Engineering in 2020 as Dean’s Distinguished Professor in the Clemson University International Center for Automotive Research (CU-ICAR). He directs the Geometric Reasoning and Artificial Intelligence Lab (GRAIL), which is located at both CU-ICAR and the Center for Manufacturing Innovation (CMI). Previously, he served on the Mechanical and Aerospace Engineering faculty at the University at Buffalo-SUNY (2012-2020). Dr. Rai also has industrial research center experience at the United Technologies Research Center (UTRC) and the Palo Alto Research Center (PARC). Dr. Rai received his B.Tech. degree in 2000 and M.S. degree in 2002 in Manufacturing Engineering from the National Institute of Foundry and Forge Technology (NIFFT), Ranchi, India, and Missouri University of Science and Technology (Missouri S&T), USA, respectively. He earned his doctoral degree in Mechanical Engineering from The University of Texas at Austin, USA, in 2006. Dr. Rai’s research is focused on developing computational tools for Manufacturing, Cyber-Physical System (CPS) Design, Autonomy, Collaborative Human-Technology Systems, Diagnostics and Prognostics, and Extended Reality (XR) domains. By combining engineering innovations with methods from machine learning, AI, statistics and optimization, and geometric reasoning, his research strives to solve important problems in the above-mentioned domains. His research has been supported by NSF, DARPA, ONR, ARL, NSWC, DMDII, CESMII, HP, NYSERDA, and NYSPII (funding totaling more than $20M as PI/Co-PI). He has authored over 100 papers to date in peer-reviewed conferences and journals covering a wide array of problems. Dr. Rai is the recipient of numerous awards, including the 2009 HP Design Innovation Award, the 2017 ASME IDETC/CIE Young Engineer Award, and the 2019 PHM Society conference best paper award. Additionally, Dr. Rai is Associate Editor of the International Journal of Production Research and the ASME Journal of Computing and Information Science in Engineering (JCISE) and has taken significant leadership roles within the ASME Computers and Information in Engineering professional society.


11:25 AM - 12:10 PM

Keynote Talk by Tanya Berger-Wolf, Ohio State University, "Imageomics: Images as the Source of Information about Life"

Abstract: Introducing the new field of imageomics: from images to biological traits using biology-structured machine learning. Images are the most abundant, readily available source for documenting life on the planet. Coming from natural history collections, laboratory scans, field studies, camera traps, wildlife surveys, autonomous vehicles on the land, water, and in the air, as well as tourists’ cameras, citizen scientists’ platforms, and posts on social media, there are millions of images of living organisms. But their power is yet to be harnessed for science and conservation. Even the traits of organisms cannot be readily extracted from images. The analysis of traits, the integrated products of genes and environment, is critical for biologists to predict the effects of environmental change or genetic manipulation and to understand the significance of patterns in the two-billion-year evolutionary history of life. I will show how data science and machine learning can turn massive collections of images into a high-resolution information database about wildlife, enabling scientific inquiry, conservation, and policy decisions. I will share our vision of the new scientific field of imageomics.

Bio: Dr. Tanya Berger-Wolf is a Professor of Computer Science and Engineering, Electrical and Computer Engineering, and Evolution, Ecology, and Organismal Biology at the Ohio State University, where she is also the Director of the Translational Data Analytics Institute. Recently she was awarded a $15M US National Science Foundation grant to establish a new Harnessing the Data Revolution institute, founding a new field of study: Imageomics. As a computational ecologist, her research is at the unique intersection of computer science, wildlife biology, and social sciences. She creates computational solutions to address questions such as how environmental factors affect the behavior of social animals (humans included). Berger-Wolf is also a director and co-founder of the conservation software non-profit Wild Me, home of the Wildbook project, which brings together computer vision, crowdsourcing, and conservation. It has been featured in media including Forbes, The New York Times, CNN, National Geographic, and most recently The Economist. Berger-Wolf has given hundreds of talks about her work, including at TEDx and UN/UNESCO AI for the Planet. Prior to coming to OSU in January 2020, Berger-Wolf was at the University of Illinois at Chicago. She has received numerous awards for her research and mentoring.


12:10 PM - 12:35 PM


Invited Talk by Charuleka Varadharajan, Lawrence Berkeley National Lab, "Multi-scale machine learning models to predict impacts of extreme events on stream temperature"

Abstract: Extreme events such as floods, droughts, and heatwaves are projected to increase due to climate change, resulting in impacts on stream water availability and quality. In this talk, I will describe our research activities building machine learning models for predicting stream temperatures to determine the impacts of climatic disturbances. The models need to account for the unpredictable timing, duration, and spatial extent of extreme events by generalizing to unmonitored locations and predicting temperatures on both short and long timescales. We use low-complexity machine learning models (Support Vector Regression and XGBoost) and deep learning models (LSTMs) to predict monthly and daily stream water temperature at local to regional scales, and incorporate process knowledge from heat budget models for the selection of inputs and scaling attributes. We demonstrate the application of these models for the mid-Atlantic and Pacific Northwest hydrological basins, which differ in climate, geological, land use, and water management attributes.

Bio: Charuleka Varadharajan is a Research Scientist at Lawrence Berkeley National Lab. As a biogeochemist and environmental data scientist, she is interested in the water, energy and carbon nexus to understand and limit the impacts of human activities on water resources and climate. Her research has previously involved studying the fate, transport and mitigation of contaminants in groundwater; measurement and prediction of carbon fluxes in terrestrial and subsurface environments; and management, synthesis, and analysis of diverse multi-scale environmental datasets. Her expertise spans various techniques for data collection and analysis, including laboratory experiments; x-ray synchrotron spectroscopy; sensor-based field data collection; web-based software to integrate distributed datasets in real-time; and the use of geoinformatics, statistical, and wavelet-based data processing to analyze high spatial and temporal resolution data. She is currently interested in enhancing LBNL’s environmental data capabilities towards building an environmental knowledgebase, in partnership with the Computational Research Division.


12:35 PM - 1:35 PM


Break


1:45 PM - 3:45 PM

Contributed Paper Presentations Session 2

  • 1:45 PM - 2:00 PM: Petar Griggs, Lin Li and Rajmonda Caceres, "Unified GNN Architecture Design for High-Throughput Material Screening" (Paper Link)

  • 2:00 PM - 2:15 PM: Shanwu Li and Yongchao Yang, "A physics-integrated deep learning framework for discovering reduced-order models of nonlinear dynamical systems" (Paper Link)

  • 2:15 PM - 2:30 PM: Yong Zhao, Edirisuriya Md Siriwardane and Jianjun Hu, "Physics guided deep learning generative models for crystal materials discovery" (Paper Link)

  • 2:30 PM - 2:45 PM: Jiequn Han, Xu-Hui Zhou and Heng Xiao, "Equivariant Vector-Cloud Neural Networks for Modeling Constitutive Tensor Transport PDEs" (Paper Link)

  • 2:45 PM - 3:00 PM: Rahul Ghosh, Arvind Renganathan, Ankush Khandelwal, Xiaowei Jia, Xiang Li, John Nieber, Christopher Duffy and Vipin Kumar, "Knowledge-guided Self-supervised Learning for estimating River-Basin Characteristics" (Paper Link)

  • 3:00 PM - 3:15 PM: Arka Daw, M. Maruf and Anuj Karpatne, "PID-GAN: A GAN Framework based on a Physics-informed Discriminator for Uncertainty Quantification with Physics" (Paper Link)

  • 3:15 PM - 3:30 PM: Nikhil Muralidhar, Jie Bu, Ze Cao, Neil Raj, Naren Ramakrishnan, Danesh Tafti and Anuj Karpatne, "PhyFlow: Physics-Guided Deep Learning for Generating Interpretable 3D Flow Fields" (Paper Link)

3:30 PM - 3:45 PM: Author Discussion Panel



3:45 PM - 4:00 PM


Break


4:00 PM - 5:00 PM

Panel Discussion: Opportunities and Challenges in using Scientific Knowledge to Guide AI

Panelists: Tanya Berger-Wolf, Youngsoo Choi, Forrest Hoffman, Paris Perdikaris, Chris Rackauckas, and Charuleka Varadharajan

Panel Questions:

  1. What are some of the biggest gaps in applying “black-box” AI methods (that are trained solely using data) in scientific problems?

  2. What are some promising examples of strategies for guiding (or informing) AI methods using scientific knowledge?

  3. Are there some emerging scientific problems where science-guided AI methods have not been explored but hold great potential?

  4. Where do you see the future of the growing field of science-guided AI and what are some of your thoughts on how we can get there?


5:00 PM - 5:30 PM


Breakout Session 2

Day 3: Nov 6, 2021


9:00 AM - 9:45 AM

Keynote Talk by Paris Perdikaris, UPenn, "Rapid PDE-Constrained Optimization via Self-Supervised Operator Learning: Applications in Design and Optimal Control"

Abstract: Design and optimal control problems are among the fundamental, ubiquitous tasks we face in science and engineering. In both cases, we aim to represent and optimize an unknown (black-box) function that associates a performance/outcome to a set of controllable variables through an experiment. In cases where the experimental dynamics can be described by partial differential equations (PDEs), such problems can be mathematically translated into PDE-constrained optimization tasks, which quickly become intractable as the number of control variables and/or the cost of experiments increases. In this talk we will introduce physics-informed deep operator networks (DeepONets); a self-supervised framework for learning the solution operator of parametric PDEs, even in the absence of labelled training data. We will demonstrate the effectiveness of DeepONets in rapidly predicting the full spatio-temporal solution of a PDE given previously unseen high-dimensional inputs, and illustrate how a trained DeepONet can be used as a differentiable surrogate for rapidly solving PDE-constrained optimization problems. Results will be presented for two canonical applications involving time-dependent optimal control of heat transfer, and drag minimization of obstacles in Stokes flow, showcasing significant speed ups compared to traditional adjoint solvers.
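
For reference, the branch/trunk structure at the heart of a DeepONet can be sketched in a few lines. The dimensions, random data, and supervised loss below are illustrative assumptions; the physics-informed variant described in the talk replaces the labels with a PDE-residual loss, as in a PINN.

    # Minimal DeepONet sketch: the branch net encodes the input function u,
    # sampled at m sensors; the trunk net encodes the query location y; their
    # dot product approximates the solution operator G(u)(y).
    import torch

    m, p = 50, 64   # sensor count, shared latent dimension

    branch = torch.nn.Sequential(torch.nn.Linear(m, 128), torch.nn.Tanh(),
                                 torch.nn.Linear(128, p))
    trunk = torch.nn.Sequential(torch.nn.Linear(1, 128), torch.nn.Tanh(),
                                torch.nn.Linear(128, p))

    def deeponet(u_sensors, y):
        # u_sensors: (batch, m); y: (batch, 1) -> predictions of shape (batch,)
        return (branch(u_sensors) * trunk(y)).sum(dim=-1)

    # One supervised training step on placeholder data.
    u, y, labels = torch.randn(32, m), torch.rand(32, 1), torch.randn(32)
    loss = ((deeponet(u, y) - labels) ** 2).mean()
    loss.backward()

Once trained, the operator network maps a new input function to its solution in a single forward pass, which is what makes it usable as a differentiable surrogate inside PDE-constrained optimization loops.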

Bio: Paris Perdikaris is an Assistant Professor in the Department of Mechanical Engineering and Applied Mechanics at the University of Pennsylvania. He received his PhD in Applied Mathematics at Brown University in 2015, and, prior to joining Penn in 2018, he was a postdoctoral researcher at the department of Mechanical Engineering at the Massachusetts Institute of Technology working on physics-informed machine learning and design optimization under uncertainty. His work spans a wide range of areas in computational science and engineering, with a particular focus on the analysis and design of complex physical and biological systems using machine learning, stochastic modeling, computational mechanics, and high-performance computing. Current research thrusts include physics-informed machine learning, uncertainty quantification in deep learning, and engineering design optimization. His work and service has received several distinctions including the DOE Early Career Award (2018), the AFOSR Young Investigator Award (2019), the Ford Motor Company Award for Faculty Advising (2020), and the SIAG/CSE Early Career Prize (2021).


9:45 AM - 10:30 AM

Keynote Talk by Forrest M. Hoffman, ORNL, "Exploiting Artificial Intelligence for Advancing Earth and Environmental System Science"

Authors: Forrest M. Hoffman (1), Jitendra Kumar (1), Zachary L. Langford (1), Shashank Konduri (2), Nathan Collier (1), Min Xu (1)

1. Oak Ridge National Laboratory, Oak Ridge, Tennessee, USA

2. NASA Goddard Space Flight Center, Greenbelt, Maryland, USA

Abstract: Earth and environmental science data encompass temporal scales of seconds to hundreds of years, and spatial scales of microns to tens of thousands of kilometers. Because of rapid technological advances in sensor development, computational capacity, and data storage density, the volume, velocity, complexity, and resolution of these data are rapidly increasing. Machine learning, data mining, and other approaches often referred to as artificial intelligence offer the promise of improved prediction and mechanistic understanding, and a path for fusing data from multiple sources into data-driven and hybrid models composed of both process-based and deep learning elements. At the watershed scale, streamflow gauges and in situ measurements must be combined with near-surface, airborne, and satellite remote sensing data to understand the structure and function of ecosystems in heterogeneous landscapes; their interactions with nutrients, water, and energy; and ecohydrological responses to environmental change. However, sampling in remote, dangerous, or topographically complex watersheds is often prohibitive, necessitating the use of sensor optimization and scaling techniques for characterizing landscape properties, vegetation distributions, and responses to climate and extreme weather events. A sampling of characterization studies and prediction approaches will be described, and strategies for applying a new generation of machine learning methods on high performance computing platforms to climate and environmental system science will be presented.

Bio: Forrest M. Hoffman is a Distinguished Computational Earth System Scientist and the Group Leader for the Computational Earth Sciences Group at Oak Ridge National Laboratory (ORNL). As a resident researcher in ORNL’s Climate Change Science Institute (CCSI) and a member of ORNL’s Computational Sciences & Engineering Division (CSED), Forrest develops and applies Earth system models (ESMs) to investigate the global carbon cycle and feedbacks between biogeochemical cycles and the climate system. He applies data mining methods using high performance computing to problems in landscape ecology, ecosystem modeling, remote sensing, and large-scale climate data analytics. He is particularly interested in applying machine learning methods to explore the influence of terrestrial and marine ecosystems on hydrology and climate. Forrest is also a Joint Faculty Member in the University of Tennessee’s Department of Civil & Environmental Engineering in nearby Knoxville, Tennessee.


10:30 AM - 11:00 AM


Break


11:00 AM - 11:25 AM


Invited Talk by Youngsoo Choi, LLNL, "Reliable and generalizable data-driven physical simulations"

Abstract: A data-driven model can be built to accurately accelerate computationally expensive physical simulations, which is essential in multi-query problems such as inverse problems, uncertainty quantification, design optimization, and optimal control. In this talk, two types of data-driven model order reduction techniques will be discussed: the black-box approach that incorporates only data, and the physics-constrained approach that incorporates first principles as well as data. The advantages and disadvantages of each method will be discussed. Several recent developments of generalizable and robust data-driven physics-constrained reduced order models will also be demonstrated for various physical simulations. For example, a hyper-reduced time-windowing reduced order model overcomes the difficulty of advection-dominated shock propagation phenomena, achieving a speed-up of O(20~100) with a relative error much less than 1% for Lagrangian hydrodynamics problems. The nonlinear manifold reduced order model overcomes the challenges posed by problems whose Kolmogorov n-width decays slowly, by representing the solution field with a compact neural network decoder, i.e., a nonlinear manifold. The space–time reduced order model accelerates a large-scale particle Boltzmann transport simulation by a factor of 2,700 with a relative error less than 1%. Furthermore, successful applications of these reduced order models in design optimization problems will be presented. Finally, libROM, a library for reduced order models, and its webpage will be introduced, which are useful for educational as well as research purposes.
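
For readers new to reduced order models, the linear-subspace (POD) construction that such methods start from can be sketched as follows. The synthetic snapshot data and basis size are illustrative assumptions; the nonlinear manifold ROM described above replaces the linear basis with a compact neural network decoder.

    # POD sketch: build a low-dimensional basis from an SVD of solution
    # snapshots, then represent full states by r coefficients.
    import numpy as np

    n, k, r = 1000, 200, 10                  # spatial dofs, snapshots, basis size
    x = np.linspace(0, 1, n)[:, None]
    t = np.linspace(0, 1, k)[None, :]
    snapshots = (np.sin(np.pi * x) * np.cos(2 * np.pi * t)
                 + 0.1 * np.sin(3 * np.pi * x) * t)   # synthetic snapshot matrix

    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    basis = U[:, :r]                         # first r POD modes

    # A full state u is encoded as q = basis.T @ u and decoded as basis @ q;
    # the ROM evolves q instead of u, which is where the speed-up comes from.
    u = snapshots[:, -1]
    u_hat = basis @ (basis.T @ u)
    print(np.linalg.norm(u - u_hat) / np.linalg.norm(u))  # small relative error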

Bio: Youngsoo Choi is a computational math scientist in the Center for Applied Scientific Computing (CASC) under the Computing Directorate at LLNL. His research focuses on developing efficient reduced order models for various physical simulations for time-sensitive, decision-making, multi-query problems, such as inverse problems, design optimization, and uncertainty quantification. Together with his collaborators, he has developed various powerful model order reduction techniques, such as machine learning-based nonlinear manifold and space-time reduced order models for nonlinear dynamical systems. He has also developed the component-wise reduced order model optimization algorithm, which enables a fast and accurate computational modeling tool for lattice-structure design. He is currently leading a data-driven surrogate modeling development team for various physical simulations, with whom he developed the open source codes libROM and LaghosROM. He is also involved with quantum computing research. He earned his undergraduate degree in Civil and Environmental Engineering from Cornell University and his PhD in Computational and Mathematical Engineering from Stanford University. He was a postdoc at Sandia National Laboratories and Stanford University prior to joining LLNL in 2017.


11:25 AM - 12:40 PM

Contributed Paper Presentations Session 3

  • 11:25 AM - 11:40 AM: Homin Song and Yongchao Yang, "Hierarchical Multi-scale Deep Learning for Super-resolution Ultrasonic Array Imaging" (Paper Link)

  • 11:40 AM - 11:55 AM: Francisco Vargas, Pierre Thodoroff, Austen Lamacraft and Neil Lawrence, "Solving Schrodinger Bridges via Maximum Likelihood" (Paper Link)

  • 11:55 AM - 12:10 PM: Dhruv Patel, Jonghyun Lee, Mojtaba Forghani, Matthew Farthing, Tyler Hesser, Peter Kitanidis and Eric Darve, "Multi-Fidelity Hamiltonian Monte Carlo Method with Deep Learning-based Surrogate" (Paper Link)

  • 12:10 PM - 12:25 PM: Jacob Moss, Felix Opolka, Bianca Dumitrascu and Pietro Lio, "Approximate Latent Force Model Inference" (Paper Link)

12:25 PM - 12:40 PM: Author Discussion Panel



12:40 PM - 1:10 PM


Summary & Closing Remarks