Third Symposium on

Knowledge-guided ML

(KGML-AAAI-22)

Held as part of AAAI Fall Symposium Series (FSS) 2022

from November 17-19, 2022

Westin Arlington Gateway, Arlington, Virginia, USA

Zoom Link to join the symposium virtually:

Wifi details in the Symposium Room:

Wifi name: Westin_ARLINGTONMEETING

Password: Westin801

Registration for our symposium is open! Register using the link below:

Overview

Knowledge-guided Machine Learning (KGML) is an emerging paradigm of research that aims to integrate scientific knowledge in the design and learning of machine learning (ML) methods to produce ML solutions that are generalizable and scientifically consistent with established theories. KGML is ripe with research opportunities to influence fundamental advances in ML for accelerating scientific discovery and has already begun to gain attention in several branches of science including physics, chemistry, biology, fluid dynamics, and geoscience.

The goal of this symposium is to nurture the community of researchers working at the intersection of ML and scientific areas and shape the vision of the rapidly growing field of KGML. This symposium builds upon the success of the previous two symposiums organized on this topic in 2020 and 2021.

Program

We have an exciting line-up of 9 invited talks and 20 contributed paper presentations. See the schedule below (all times are in the Eastern Time zone).

Day 1 (Nov 17, 2022)

9 AM to 9.10 AM

Welcome and Introduction

9.10 AM to 9.50 AM

Invited Talk by Madhav Marathe
Title: Combining Theory and Data-Driven Models for Epidemic Planning, Response and Decision Making

Abstract: Infectious diseases cause more than 13 million deaths a year worldwide. Despite significant advances by scientists and public health authorities that have led to reduced rates of infections and mortality, we continue to find ourselves unable to respond rapidly and effectively to pandemics. The ongoing COVID-19 pandemic serves as a grim reminder of our collective inability to control pandemics. Globalization, anti-microbial resistance, urbanization, climate change, social media and ecological pressures threaten to upend the progress we have made in fighting infectious diseases. Pandemics will happen again: it is not if but when. In this lecture, we will argue that pandemics are a complex systems problem, intricately tied to social, behavioral, political and economic issues that go beyond human health. We will give an overview of the state of the art in real-time computational epidemiology. Then, using COVID-19 as an exemplar, we will describe how scalable computing, AI and data science can play an important role in advancing real-time epidemic science. Computational challenges and directions for future research will be discussed.

Bio: Madhav Marathe is a Distinguished Professor in Biocomplexity, the division director of the Network Systems Science and Advanced Computing Division at the Biocomplexity Institute and Initiative, and a Professor in the Department of Computer Science at the University of Virginia. His research interests are in network science, sustainable habitats, AI, foundations of computing, and high-performance computing. Over the last 20 years, his division has supported federal and state authorities in their efforts to respond to a number of problems arising in the context of national security, sustainability and pandemic science, including the COVID-19 pandemic. Before joining UVA, he held positions at Virginia Tech and the Los Alamos National Laboratory. He is a Fellow of the IEEE, ACM, SIAM and AAAS.

9.50 AM to 10.26 AM

Contributed Paper Presentation Session 1

  • 9.50 AM - 10.08 AM: Jingyuan Chou, Jiangzhuo Chen, and Madhav Marathe, "Estimate Causal Effects of Public Health Policies in COVID-19 Pandemic", (Paper Link)

  • 10.08 AM - 10.26 AM: Alessandro Oltramari, Anees Ul Mehdi, and Andreas Birkefeld, "Assessing Emission Calibration Projects with Hybrid AI-based Support", (Paper Link)

10.26 AM to 11 AM

Break

11 AM to 11.40 AM

Invited Talk by Inanc Senocak
Title: Physics and Equality Constrained Artificial Neural Networks for Learning the Solution of Partial Differential Equations

Abstract: Artificial neural networks can be trained to learn the solution of partial differential equations (PDE) by using the residual form of the PDEs, their boundary conditions, and any data that may be available from measurements. As such, learning the solution of a PDE with neural networks can be viewed as a meshless method in which parameters of the neural network are optimized based on a governing equation along with its boundary conditions. Aside from the optimization algorithm used to minimize an objective function, formulation of the optimization problem at hand has received less attention. Physics-informed neural networks (PINNs) are typically trained using a composite objective function, which is a weighted sum of the residuals of a governing partial differential equation (PDE) and its boundary conditions. A major drawback of this approach is that boundary conditions are not properly used to constrain the solution and the weighting factors that appear in the objective function are problem specific and not known a priori. To address these shortcomings in a principled fashion, we pursue an equality constraint optimization formulation in which we use boundary conditions and any high-fidelity data to constrain the PDE loss. We then solve the constrained optimization problem as an unconstrained optimization problem using the augmented Lagrangian method (ALM). We show through various examples of forward and inverse problems that our proposed method leads to marked improvements in relative error of learned solutions. Furthermore, we present practical strategies to make the method more efficient to learn non-smooth solutions.
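
For readers unfamiliar with the two formulations contrasted in the abstract, the sketch below (in our notation, not the speaker's) makes the difference concrete. A standard PINN minimizes a weighted composite loss

\[ \min_{\theta}\; \mathcal{L}_{\mathrm{PDE}}(\theta) + \lambda_{b}\,\mathcal{L}_{\mathrm{BC}}(\theta) + \lambda_{d}\,\mathcal{L}_{\mathrm{data}}(\theta), \]

with problem-specific weights \(\lambda_{b}, \lambda_{d}\). The equality-constrained alternative instead solves

\[ \min_{\theta}\; \mathcal{L}_{\mathrm{PDE}}(\theta) \quad \text{subject to} \quad \mathcal{C}_{\mathrm{BC}}(\theta) = 0, \;\; \mathcal{C}_{\mathrm{data}}(\theta) = 0, \]

via the augmented Lagrangian

\[ \mathcal{L}_{A}(\theta, \lambda; \mu) = \mathcal{L}_{\mathrm{PDE}}(\theta) + \lambda^{\top}\mathcal{C}(\theta) + \tfrac{\mu}{2}\,\lVert \mathcal{C}(\theta) \rVert^{2}, \]

where the multipliers are updated as \(\lambda \leftarrow \lambda + \mu\,\mathcal{C}(\theta)\) and the penalty \(\mu\) is gradually increased, so the relative weighting of the constraints is determined by the method rather than tuned by hand.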

Bio: Inanc Senocak (E-nahnch Sheh-no-chak) is a William Kepler Whiteford Faculty Fellow and an associate professor of mechanical engineering at the University of Pittsburgh. He obtained his PhD degree in aerospace engineering from the University of Florida and his B.Sc. degree in mechanical engineering from the Middle East Technical University in Ankara, Turkey. He worked as a postdoctoral researcher at Stanford University and the Los Alamos National Laboratory prior to starting his faculty career at Boise State University in 2007. He is a fellow of the American Society of Mechanical Engineers (ASME), an associate fellow of the American Institute of Aeronautics and Astronautics (AIAA), and a past recipient of a CAREER Award from the National Science Foundation.

11.40 AM to 12.16 PM

Contributed Paper Presentation Session 2

  • 11.40 AM - 11.58 AM: Nikhil Muralidhar, Nicholas Lubbers, Mohamed Mehana, Naren Ramakrishnan, and Anuj Karpatne, "Alleviating Data Paucity with Knowledge-Guided Transfer Learning: An Application in Subsurface Modeling", (Paper Link)

  • 11.58 AM - 12.16 PM: Jennifer Sleeman, David Chung, Chace Ashcraft, Jay Brett, Anand Gnanadesikan, Yannis Kevrekidis, Thomas Haine, Marie-Aude Pradal, Larry White, Renske Gelderloos, Caroline Tang, Anshu Saksena, and Marisa Hughes, "Using Artificial Intelligence to Aid Scientific Discovery of Climate Tipping Points", (Paper Link)

12.16 PM to 2 PM

Lunch Break

2 PM to 2.40 PM

Invited Talk by Omar Ghattas
Title: Geometric Deep Neural Network Surrogates for Bayesian Inverse Problems (on Zoom)

Abstract: Bayesian inverse problems (BIPs) governed by large-scale complex models (such as PDEs) in high or infinite parameter dimensions are often intractable. Efficient evaluation of the parameter-to-observable (PtO) map, which involves solution of the forward model, is key to making BIPs tractable. Surrogate approximations of PtO maps have the potential to greatly accelerate solution of BIPs, provided an accurate surrogate can be trained with modest numbers of model solves. Unfortunately, constructing such surrogates presents significant challenges when the parameter dimension is high and the forward model is expensive. Deep neural networks (DNNs) have emerged as leading contenders for overcoming these challenges. We demonstrate that black-box application of DNNs for problems with infinite dimensional parameter fields leads to poor results when training data are limited due to the expense of the model. However, by constructing a network architecture that exploits the geometry of the PtO map -- in particular its smoothness, anisotropy, and intrinsic low-dimensionality -- as revealed through adjoint-PDE-based Gauss-Newton Hessians, one can construct a dimension-independent "reduced basis" DNN surrogate with superior generalization properties using only limited training data. We employ this reduced basis DNN surrogate to make tractable Bayesian optimal experimental design (which subsumes BIPs), in particular for finding sensor locations that maximize the expected information gain from the data. Application to inverse wave scattering is presented. This work is joint with Tom O'Leary-Roseberry, Keyi Wu, and Peng Chen.
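
A schematic of the reduced-basis surrogate idea described above, in our notation rather than the speaker's:

\[ q = \mathcal{F}(m) \;\approx\; \Phi_{q}\, f_{\theta}\!\big( \Psi_{m}^{\top} m \big), \]

where \(\mathcal{F}\) is the PtO map, the columns of \(\Psi_{m}\) span dominant eigenvectors of (e.g., sample-averaged) Gauss-Newton Hessians of the map computed with adjoint PDE solves, \(\Phi_{q}\) is a reduced basis for the observables, and \(f_{\theta}\) is a comparatively small network acting between the two sets of coefficients. Because the bases absorb the smoothness, anisotropy, and intrinsic low-dimensionality of the map, the trainable part of the surrogate does not grow with the discretization dimension of \(m\).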

Bio: Dr. Omar Ghattas is Professor of Mechanical Engineering at The University of Texas at Austin and holds the Fletcher Stuckey Pratt Chair in Engineering. He is also the Director of the OPTIMUS (OPTimization, Inverse problems, Machine learning, and Uncertainty quantification for complex Systems) Center in the Oden Institute for Computational Engineering and Sciences. He is a member of the faculty in the Computational Science, Engineering, and Mathematics (CSEM) interdisciplinary PhD program in the Oden Institute, and holds courtesy appointments in Geological Sciences, Computer Science, and Biomedical Engineering. With collaborators, he received the ACM Gordon Bell Prize in 2003 (for Special Achievement) and again in 2015 (for Scalability), and was a finalist for the 2008, 2010, and 2012 Bell Prizes. He received the 2019 SIAM Computational Science & Engineering Best Paper Prize, and the 2019 SIAM Geosciences Career Prize. He is a Fellow of the Society for Industrial and Applied Mathematics (SIAM) and serves on the National Academies Committee on Applied and Theoretical Statistics. Ghattas's research focuses on advanced mathematical, computational, and statistical theory and algorithms for large-scale inverse and optimization problems governed by models of complex engineered and natural systems.

2.40 PM to 3.34 PM

Contributed Paper Presentation Session 3

  • 2.40 PM - 2.58 PM: Pravin Bhasme and Udit Bhatia, "Augmenting Long Short Term Memory Processes with Physics Informed Memory in the Hydrological Processes for Improved Predictability and Interpretability", (Paper Link) (on Zoom)

  • 2.58 PM - 3.16 PM: Sangeeta Srivastava, Samuel Olin, Viktor Podolskiy, Anuj Karpatne, Wei-Cheng Lee, and Anish Arora, "Physics-Guided Problem Decomposition for Scaling Deep Learning of High-dimensional Eigen-Solvers: The Case of Schrodinger's Equation", (Paper Link)

  • 3.16 PM - 3.34 PM: Abantika Ghosh, Mohannad Elhamod, Jie Bu, Wei-Cheng Lee, Anuj Karpatne, and Viktor Podolskiy, "Physics-Informed Machine Learning for Optical Modes in Composites", (Paper Link)

3.34 PM to 4 PM

Break

4 PM to 4.40 PM

Invited Talk by Ranga Raju Vatsavai
Title: Knowledge Guided Deep Inferencing Framework for Cloud Removal from Multi-sensor Optical Remote Sensing Imagery

Abstract: Remote sensing data is a prime example of spatial big data. NASA recently collected its 10 millionth Landsat image. The coarse-resolution (30 m) Landsat collection alone tops a petabyte, whereas the private satellite data producer MAXAR holds more than 125 petabytes of high-resolution imagery. However, since more than 50% of Earth's surface is covered by clouds at any time, the performance of various downstream tasks such as segmentation, recognition, and classification on remote sensing images can be seriously affected by cloud-contaminated pixels. In this talk, I will discuss recent advances in deep learning-based inferencing for reconstructing cloud-contaminated regions, including our recent work on dealing with multi-resolution and multi-sensor challenges. In particular, I will show how knowledge-guided deep learning is useful in harmonizing multi-sensor spectral information. Finally, I will present some open research challenges.

Bio: Raju Vatsavai is a professor of computer science and associate director of the Center for Geospatial Analytics at North Carolina State University. Raju works at the intersection of spatial and temporal big data management and analytics, GeoAI, and high-performance computing with applications in national security, geospatial intelligence, natural resources, climate change, location-based services, healthcare, and human terrain mapping. He worked at many leading research laboratories: ORNL, IBM-Research, University of Minnesota, AT&T Labs, and CDAC-India. He has published more than 100 peer-reviewed articles in leading conferences and journals and holds MS and Ph.D. degrees in computer science from the University of Minnesota.

4.40 PM to 5.40 PM

Panel Discussion: Opportunities and Challenges for KGML in Science

Panelists: Inanc Senocak, Ranga Raju Vatsavai, and Paris Perdikaris

Panel Questions:

1. What are some of the biggest advantages of using KGML methods compared to black-box ML models in scientific problems?

2. Which scientific problems are particularly suited for KGML applications? Where do you see this field making the biggest impact?

3. Are there some scientific problems where KGML methods have not been fully explored but hold great potential?

4. What are some of the gaps in the current state of KGML methods and what are your thoughts on addressing them?

Day 2 (Nov 18, 2022)

9 AM to 9.40 AM

Invited Talk by Youzuo Lin
Title: Physics-guided Data-driven Computational Seismic Imaging: Shifting Paradigm from Supervised Very Deep Networks to Unsupervised Lightweight Models

Abstract: The goal of seismic imaging is to obtain subsurface properties from surface measurements. Seismic images have proven valuable, even crucial, for a variety of applications, including subsurface energy exploration, earthquake early warning, carbon capture and sequestration, estimating pathways of sub-surface contaminant transport, etc. Like most inverse problems, seismic imaging is ill-posed, meaning many different subsurface configurations can give rise to the same surface measurements. Iterative optimization algorithms for the inverse problem are typically very computationally expensive because they require many evaluations of the forward model, which is itself computationally expensive. A further challenge is the different sensitivity of subsurface properties to the seismic data; density for example is more difficult to accurately infer than P-wave velocity. Recently, machine-learning-based computational methods have been pursued in the context of scientific computational imaging problems. Some success has been attained when an abundance of simulations and labels are available. Nevertheless, seismic inversion is not a data-rich domain. There is a relatively small amount of field data in existence due to the high cost of acquisition, and as a result of its commercial value, a very limited amount is publicly available. In this talk, I will explore our recent R&D efforts to alleviate data scarcity and lack-of-label issues and to further improve model generalization using underlying physics information. A series of numerical experiments are conducted using datasets from synthetic simulations to field applications to evaluate the effectiveness of our techniques.
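
For context, the imaging problem referred to here is the classical ill-posed inverse problem (our notation, not the speaker's):

\[ \hat{m} = \arg\min_{m}\; \tfrac{1}{2}\,\lVert f(m) - d_{\mathrm{obs}} \rVert^{2} + \alpha\, R(m), \]

where \(m\) collects the subsurface properties (e.g., P-wave velocity and density), \(f\) is the expensive wave-physics forward model, \(d_{\mathrm{obs}}\) are the surface measurements, and \(R\) is a regularizer encoding prior knowledge. The data-driven approaches discussed in the talk replace repeated evaluations of \(f\) and its adjoint with a learned mapping from \(d_{\mathrm{obs}}\) toward \(m\), which is where physics information and the scarcity of labeled field data become central.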

Bio: Youzuo Lin is a Computational Scientist at the Earth and Environmental Sciences Division of Los Alamos National Laboratory. Before joining as a staff scientist at LANL, he completed his Ph.D. in Applied and Computational Mathematics at Arizona State University. His current research focuses on physics-informed machine learning, deep learning, computational methods, and their applications in computational imaging, signal and image analysis. Specifically, he has worked on subsurface imaging for energy exploration, medical imaging and cancer detection, and time series classification for small earthquake detection.

9.40 AM to 10.34 AM

Contributed Paper Presentation Session 4

  • 9.40 AM - 9.58 AM: Arka Daw, Jie Bu, Sifan Wang, Paris Perdikaris, and Anuj Karpatne, "Rethinking the Importance of Sampling in Physics-informed Neural Networks", (Paper Link)

  • 9.58 AM - 10.16 AM: Alexander New, Benjamin Eng, Andrea Timm, and Andrew Gearhart, "Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural Networks on Coupled Ordinary Differential Equations", (Paper Link)

  • 10.16 AM - 10.34 AM: Jason Harman and Jaelle Scheuerman, "Multi-Criteria Comparison as a Method of Advancing Knowledge-Guided Machine Learning", (Paper Link)

10.34 AM to 11 AM

Break

11 AM to 11.40 AM

Invited Talk by Paris Perdikaris
Title: Self-supervised learning of PDE solution manifolds in function spaces

Abstract: While the great success of modern deep learning lies in its ability to approximate maps between finite-dimensional vector spaces, many tasks in science and engineering involve continuous measurements that are functional in nature. For example, in climate modeling one might wish to predict the pressure field over the earth from measurements of the surface air temperature field. The goal is then to learn an operator from the space of temperature functions to the space of pressure functions. In recent years, operator learning techniques using deep neural networks have emerged as a powerful tool for regression problems in infinite-dimensional function spaces. In this talk we present a general approximation framework for neural operators and demonstrate that their performance fundamentally depends on their ability to learn low-dimensional parameterizations of solution manifolds. This motivates new architectures which are able to capture intrinsic low-dimensional structure in the space of target output functions. Additionally, we provide a way to train these models in a self-supervised manner, even in the absence of paired labeled examples. These contributions result in neural PDE solvers which yield fast and discretization-invariant predictions of spatio-temporal fields up to three orders of magnitude faster compared to classical numerical solvers. We will also discuss key open questions related to generalization, accuracy, data-efficiency and inductive bias, the resolution of which will be critical for the success of AI in science and engineering.
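
The operator-learning setting discussed here can be summarized as follows (notation ours):

\[ \mathcal{G}: \mathcal{U} \to \mathcal{V}, \qquad v = \mathcal{G}(u), \qquad \mathcal{G}(u) \approx \mathcal{D}_{\theta}\big( \mathcal{E}_{\phi}(u) \big), \]

where \(u\) (e.g., a surface air temperature field) and \(v\) (e.g., a pressure field) live in infinite-dimensional function spaces, \(\mathcal{E}_{\phi}\) encodes the input function into a low-dimensional latent vector, and \(\mathcal{D}_{\theta}\) decodes that vector into the output function evaluated at arbitrary query locations. The abstract's claim is that accuracy hinges on how well this latent space captures a low-dimensional parameterization of the solution manifold, and that training can proceed even when paired examples \((u, v)\) are unavailable.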

Bio: Dr. Paris Perdikaris is an Assistant Professor in the Department of Mechanical Engineering and Applied Mechanics at the University of Pennsylvania. He received his PhD in Applied Mathematics at Brown University in 2015, and, prior to joining Penn in 2018, he was a postdoctoral researcher in the Department of Mechanical Engineering at the Massachusetts Institute of Technology working on physics-informed machine learning and design optimization under uncertainty. His work spans a wide range of areas in computational science and engineering, with a particular focus on the analysis and design of complex physical and biological systems using machine learning, stochastic modeling, computational mechanics, and high-performance computing. Current research thrusts include physics-informed machine learning, uncertainty quantification in deep learning, and engineering design optimization. His work and service have received several distinctions including the DOE Early Career Award (2018), the AFOSR Young Investigator Award (2019), the Ford Motor Company Award for Faculty Advising (2020), and the SIAG/CSE Early Career Prize (2021).

11.40 AM to 12.20 PM

Invited Talk by Lu Lu
Title: Multifidelity deep neural operators for efficient learning of partial differential equations with application to fast inverse design of nanoscale heat transport

Abstract: Deep neural operators can learn operators mapping between infinite-dimensional function spaces via deep neural networks and have become an emerging paradigm of scientific machine learning. However, training neural operators usually requires a large amount of high-fidelity data, which is often difficult to obtain in real engineering problems. Here we address this challenge by using multifidelity learning, i.e., learning from multifidelity data sets. We develop a multifidelity neural operator based on a deep operator network (DeepONet). A multifidelity DeepONet includes two standard DeepONets coupled by residual learning and input augmentation. Multifidelity DeepONet significantly reduces the required amount of high-fidelity data and achieves one order of magnitude smaller error when using the same amount of high-fidelity data. We apply a multifidelity DeepONet to learn the phonon Boltzmann transport equation (BTE), a framework to compute nanoscale heat transport. By combining a trained multifidelity DeepONet with a genetic algorithm or topology optimization, we demonstrate a fast solver for the inverse design of BTE problems.
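
A minimal sketch of the architecture described above, assuming one plausible form of the residual coupling and input augmentation; the class names, layer sizes, and exact wiring below are illustrative guesses, not the authors' implementation:

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Minimal DeepONet: a branch net encodes the input function sampled at
    fixed sensors, a trunk net encodes a query coordinate, and the output is
    the inner product of the two embeddings."""
    def __init__(self, n_sensors, coord_dim, width=64, p=64):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(n_sensors, width), nn.Tanh(),
                                    nn.Linear(width, p))
        self.trunk = nn.Sequential(nn.Linear(coord_dim, width), nn.Tanh(),
                                   nn.Linear(width, p))

    def forward(self, u_sensors, y):
        return (self.branch(u_sensors) * self.trunk(y)).sum(-1, keepdim=True)

class MultifidelityDeepONet(nn.Module):
    """Two standard DeepONets: a low-fidelity one, plus a residual one whose
    trunk input is augmented with the low-fidelity prediction at the query
    point (one way to realize 'residual learning and input augmentation')."""
    def __init__(self, n_sensors, coord_dim):
        super().__init__()
        self.lofi = DeepONet(n_sensors, coord_dim)
        self.residual = DeepONet(n_sensors, coord_dim + 1)   # input augmentation

    def forward(self, u_sensors, y):
        y_lo = self.lofi(u_sensors, y)                       # low-fidelity prediction
        y_aug = torch.cat([y, y_lo], dim=-1)
        return y_lo + self.residual(u_sensors, y_aug)        # residual correction
```

In a workflow like this, the low-fidelity DeepONet would presumably be trained on abundant low-fidelity simulations and the residual network on the scarce high-fidelity pairs, consistent with the reduction in required high-fidelity data described in the abstract.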

Bio: Dr. Lu Lu is an Assistant Professor in the Department of Chemical and Biomolecular Engineering at the University of Pennsylvania. The goal of the Lu Group’s research is to model and simulate physical and biological systems at different scales by integrating modeling, simulation, and machine learning, and to provide strategies for system learning, prediction, optimization, and decision making in real time. His current research interest lies in scientific machine learning, including theory, algorithms, software, and its applications to engineering, physical, and biological problems. His broader research interests focus on multiscale modeling and high-performance computing for physical and biological systems.

12.20 PM to 2 PM

Lunch Break

2 PM to 3.30 PM

Contributed Paper Presentation Session 5

  • 2 PM - 2.18 PM: Jostein Barry-Straume, Arash Sarshar, Andrey A. Popov, and Adrian Sandu, "Physics-informed neural networks for PDE-constrained optimization and control", (Paper Link) (on Zoom)

  • 2.18 PM - 2.36 PM: Rachel Cooper and Adrian Sandu, "Basis-Agnostic Polynomial Chaos Expansions via a Modified Neural Network Architecture", (Paper Link)

  • 2.36 PM - 2.54 PM: Leon Liu and Yiqiao Yin, "Towards Explainable AI on Chest X-ray Diagnosis using Image Segmentation and CAM Visualization", (Paper Link)

  • 2.54 PM - 3.12 PM: Benjamin DiPrete, Rao Garimella, Cristina Garcia Cardona, and Navamita Ray, "Reinforcement Learning for Block Decomposition of CAD Models", (Paper Link)

  • 3.12 PM - 3.30 PM: Paul Atzberger, "MLMOD Package: Machine Learning Methods for Data-Driven Modeling in LAMMPS", (Paper Link)

3.30 PM to 4 PM

Break

4 PM to 4.40 PM

Invited Talk by Yexiang Xue
Title: Scaling Up AI-driven Scientific Discovery via Embedding Physics Modeling into End-to-end Learning and Harnessing Locality Sensitive Hashing

Abstract: Learning first-principle physics models directly from experiment data has been a grand goal of Artificial Intelligence, and would greatly accelerate the pace of scientific discovery if achieved. Nevertheless, AI-driven scientific discovery in a closed loop has not been fully realized because of major computational bottlenecks, particularly the lack of end-to-end frameworks embedding large-scale simulation of physics models into learning, and the lack of efficient algorithms to accelerate not only the forward simulation but also the even more computationally demanding backward gradient propagation. We address these key computational bottlenecks by proposing an end-to-end framework to learn physics models in the form of Partial Differential Equations (PDEs) directly from experiment data. In our framework, PDE solvers are formulated as fully differentiable neural network layers, allowing for seamless embedding of large-scale physics simulation into learning and efficient gradient propagation. We also propose to scale up the forward simulation and backward learning of first-principle models by harnessing Locality Sensitive Hashing (LSH), taking advantage of the sparse updates of PDE models and the key insight that elements with similar neighbors share similar temporal dynamics. Our LSH-based approach groups elements with similar neighbors into a single hash bucket and performs one update per hash bucket. This reduces the time complexity to be proportional to the number of non-empty LSH hash buckets, orders of magnitude less than the number of operations required by the brute-force algorithm. Empirical evaluations on learning nano-structure evolution in materials under extreme conditions confirm the efficacy and efficiency of our algorithms.
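
A toy illustration of the bucketing idea on a 1D diffusion-like stencil; the hash construction, update rule, and function names here are ours, chosen only to make the mechanism concrete, not the speaker's implementation:

```python
import numpy as np

def lsh_step(u, n_bits=8, seed=0):
    """One explicit stencil update sketched with LSH bucketing: interior cells
    whose (left, self, right) neighborhoods hash to the same bucket share a
    single computed update instead of being updated one by one."""
    rng = np.random.default_rng(seed)
    feats = np.stack([u[:-2], u[1:-1], u[2:]], axis=1)        # neighborhood features
    planes = rng.normal(size=(feats.shape[1], n_bits))        # random hyperplanes
    codes = ((feats @ planes) > 0).astype(int) @ (1 << np.arange(n_bits))
    u_new = u.copy()
    for code in np.unique(codes):                             # one update per non-empty bucket
        members = np.where(codes == code)[0]
        left, cen, right = feats[members[0]]                  # representative neighborhood
        u_new[1:-1][members] = cen + 0.1 * (left - 2.0 * cen + right)  # e.g. explicit diffusion
    return u_new

# Cells in smooth regions collapse into a few shared buckets; sharp fronts get their own.
u = np.sin(np.linspace(0.0, np.pi, 1000))
u = lsh_step(u)
```

The work described in the abstract applies this kind of bucketing inside a differentiable PDE solver, so both the forward simulation and the gradient propagation scale with the number of non-empty buckets rather than the number of elements.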

Bio: Dr. Yexiang Xue is an assistant professor at Purdue University. The goal of Dr. Xue's research is to bridge large-scale constraint-based reasoning and optimization with state-of-the-art machine learning techniques in order to enable intelligent agents to make optimal decisions in high-dimensional and uncertain real-world applications. More specifically, Dr. Xue's research focuses on scalable and accurate probabilistic reasoning techniques, statistical modeling of data, and robust decision-making under uncertainty. Dr. Xue's work is motivated by key problems across multiple scientific domains, ranging from artificial intelligence, machine learning, renewable energy, materials science, crowdsourcing, citizen science, urban computing, ecology, to behavioral econometrics. Dr. Xue focuses on developing cross-cutting computational methods, with an emphasis in the areas of computational sustainability and scientific discovery.

4.40 PM to 5.40 PM

Panel Discussion: Evaluating Success of KGML Methods

Panelists: Lu Lu and Yexiang Xue

Panel Questions:

1. What are some general types of evaluation metrics we can use to evaluate the performance of KGML methods?

2. How can we standardize evaluation benchmarks in KGML (e.g., using same datasets, hyper-parameter settings, baseline methods) to ensure better reproducibility of results?

3. How can we evaluate the ability of KGML methods to generalize on out-of-distribution test samples? Are current benchmark datasets in KGML sufficient or do we need better benchmarks?

4. Can we create grand challenge problems/competitions in KGML to engage the broader community of researchers?

Day 3 (Nov 19, 2022)

9 AM to 9.40 AM

Invited Talk by Pedram Hassanzadeh
Title: Integrating the spectral analyses of neural networks and nonlinear physics for explainability, generalizability, and stability

Abstract: Turbulent flows, such as atmospheric and oceanic circulations, involve a variety of nonlinearly interacting physical processes spanning a broad range of spatial and temporal scales. To make simulations of these flows computationally tractable, e.g., for weather/climate prediction, processes with scales smaller than the typical grid size of numerical models have to be parameterized. Recently, there has been substantial interest (and progress) in using deep learning techniques to develop data-driven subgrid-scale (SGS) parameterizations. Another approach that is rapidly gaining popularity is to learn the entire spatio-temporal variability of a nonlinear dynamical system from data, i.e., developing fully data-driven forecast models or emulators. For either of these approaches to be useful and reliable in practice, a number of major challenges have to be addressed. These include: 1) instabilities or unphysical drifts, 2) learning in the small-data regime by adding physics constraints, 3) interpretability based on physics, and 4) extrapolation to different parameters. Using several setups of 2D turbulence, two-layer quasi-geostrophic turbulence, Rayleigh-Benard convection, and observation-derived ERA5 atmospheric reanalysis data, we introduce methods to address (1)-(4). The key aspect of some of these methods is combining the spectral analyses of deep neural networks and turbulence/nonlinear physics, as well as leveraging recent advances in theory and applications of deep learning. In the end, we will discuss scaling up these methods to more complex systems and real-world applications, e.g., SGS modeling of atmospheric gravity waves and conducting short- and long-term weather forecasting. This presentation covers several collaborative projects involving Ashesh Chattopadhyay (Rice U), Yifei Guan (Rice U), Adam Subel (Rice U/NYU), and Laure Zanna (NYU).
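
For readers outside climate modeling, the SGS parameterization task mentioned above has the generic form (our notation):

\[ \frac{\partial \bar{u}}{\partial t} = \mathbf{N}(\bar{u}) + \Pi, \qquad \Pi = \overline{\mathbf{N}(u)} - \mathbf{N}(\bar{u}) \approx f_{\theta}(\bar{u}), \]

where the overbar denotes filtering or coarse-graining to the model grid, \(\mathbf{N}\) is the resolved nonlinear dynamics, \(\Pi\) is the unresolved subgrid-scale forcing, and \(f_{\theta}\) is the data-driven closure whose spectral behavior the talk analyzes. A fully data-driven emulator, by contrast, learns the entire map from \(\bar{u}(t)\) to \(\bar{u}(t+\Delta t)\) directly from data.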

Bio: Dr. Hassanzadeh received his B.S. from the University of Tehran (2005), M.S. from the University of Waterloo (2007), and Ph.D. from UC Berkeley (2013), all in Mechanical Engineering. He also holds a M.A. degree in Mathematics from UC Berkeley (2012). He was a Ziff Environmental Fellow at the Harvard University Center for the Environment (2013-2015) and a Postdoctoral Fellow at the Harvard University Department of Earth and Planetary Science (2015-2016). Dr. Hassanzadeh was also a Research Associate at the University of Waterloo (2007-2008), GFD Fellow at the Woods Hole Oceanographic Institution (2012), and Associate at Harvard University (2016). He joined the faculty at Rice in 2016. Dr. Hassanzadeh’s honors and awards include a CAREER Award from National Science Foundation (NSF), Young Investigator Award from the Office of Naval Research (ONR), Early-Career Research Fellowship from the National Academy of Sciences Gulf Research Program, Ziff Environmental Fellowship from the Harvard University Center for the Environment, NSERC Postgraduate Scholarship from the Natural Sciences and Engineering Research Council of Canada, Geophysical Fluid Dynamics Fellowship from the Woods Hole Oceanographic Institution, and Outstanding Preliminary Examination Award and Jonathan Laitone Memorial Scholarship from the Department of Mechanical Engineering of UC Berkeley.


9.40 AM to 10.34 AM

Contributed Paper Presentation Session 6

  • 9.40 AM - 9.58 AM: Kevin Menear, Dmitry Duplyakin, Madeleine Oliver, Munjal Shah, Michael Martin, Janna Martinek, Karthik Nithyanandam, and Zhiwen Ma, "One System, Many Models: Designing a Surrogate Model for Sulfur Thermal Energy Storage", (Paper Link)

  • 9.58 AM - 10.16 AM: Daniel O'Malley, Javier Santos, and Nicholas Lubbers, "Interlingual Automatic Differentiation: Software 2.0 between PyTorch and Julia", (Paper Link)

  • 10.16 AM - 10.34 AM: Christine Allen-Blanchette, Justice Mason, Naomi Leonard, Nicholas Zolman, and Elizabeth Davison, "Learning Interpretable Dynamics from Images of a Freely Rotating 3D Rigid Body", (Paper Link)

10.34 AM to 11 AM

Break

11 AM to 11.36 AM

Contributed Paper Presentation Session 7

  • 11 AM - 11.18 AM: Akshat Alok, Alisha Sharma, and Jason Geder, "A Practical, Machine Learning Based Approach for Distance Estimation Using the Implicitly Generated Noise of an Unmanned Air Vehicle", (Paper Link)

  • 11.18 AM - 11.36 AM: Brian Zhou, Jason Geder, Alisha Sharma, Julian Lee, Marius Pruessner, Ravi Ramamurti, and Kamal Viswanath, "Computational Approaches for Modeling Power Consumption on an Underwater Flapping Fin Propulsion System", (Paper Link)

11.36 AM to 11.45 AM

Concluding Remarks