Abstracts

Gianluca Iaccarino

Stanford University

Title: Experiments with Machine Learning in Fluids Applications

Machine learning techniques are gaining popularity in physics applications by providing acceleration of the solution process, super-resolution, equation discovery, data compression, etc. While different flavors of data-driven techniques have been proposed in the literature, in the first part of this talk I will focus on hybrid approaches in which an existing grid-based numerical technique for the governing equations is augmented with a neural-network correction. It is unclear whether these approaches improve on the baseline numerical solver by approximating local subgrid-scale dynamics, by incorporating long-range spatial/temporal correlations, or by introducing high-order-like solution reconstructions. Furthermore, it is unclear how different aspects of the training pipeline affect the quality of the neural correction and its generalizability. We investigate these issues using both the Burgers and the Navier-Stokes equations. In the second part of the talk, we focus on the use of convolutional autoencoders to study aerodynamic stall, with an emphasis on extreme latent-space compression to extract physical insight. We show how physics imprinting (the correlation between latent variables and physical quantities) naturally emerges and how it is affected by the training database.
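For concreteness, the hybrid setup in the first part of the talk can be illustrated with a minimal sketch: a coarse finite-difference update for the 1D viscous Burgers equation augmented by a learned correction. The solver parameters, the correction network `correction_net`, and the way it would be trained are illustrative placeholders, not the speaker's implementation.

```python
# Minimal sketch of a hybrid step: baseline grid update plus a neural correction.
# `correction_net` is a hypothetical trained network mapping the coarse state to
# a correction field; a zero function stands in for it below.
import numpy as np

def burgers_step(u, dx, dt, nu=0.01):
    """One explicit update of 1D viscous Burgers with periodic boundaries."""
    flux = 0.5 * u**2
    convection = (np.roll(flux, -1) - np.roll(flux, 1)) / (2.0 * dx)
    diffusion = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    return u - dt * convection + dt * nu * diffusion

def hybrid_step(u, dx, dt, correction_net):
    """Baseline update augmented by a learned (e.g., subgrid-scale) correction."""
    u_base = burgers_step(u, dx, dt)
    return u_base + dt * correction_net(u_base)

x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
u = np.sin(x)
u = hybrid_step(u, dx=x[1] - x[0], dt=1e-3, correction_net=lambda v: np.zeros_like(v))
```

The open questions raised in the abstract (local subgrid-scale modeling versus long-range correlations versus high-order-like reconstruction) correspond to what information the correction network is given and how it is trained.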

Daniele Schiavazzi

U. of Notre Dame

Title: Adaptive variational inference for physics-based models

Learning the parameters of physics-based models under uncertainty can present multiple challenges: models can be computationally expensive, or the posterior distribution can be geometrically complex (e.g., multimodal). To tackle these challenges, we propose a modular framework based on variational inference. NoFAS (Normalizing Flow with Adaptive Surrogate) couples a normalizing flow with a neural-network surrogate that is trained by alternating gradient updates computed from a memory-dependent loss function. In addition, AdaAnn (Adaptive Annealing) adaptively determines an annealing schedule for the inverse temperature, which makes it easier to sample from complicated posterior distributions while reducing the overall computational cost with respect to widely used linear schedulers.
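To give a flavor of the annealing idea, the sketch below evaluates a tempered (annealed) posterior and adapts the inverse-temperature increment from the spread of the log-likelihood under the current approximation. The specific step rule and all function names are illustrative assumptions, not necessarily the AdaAnn scheduler presented in the talk.

```python
# Annealed variational target: p_t(theta) ∝ prior(theta) * likelihood(theta)**t,
# with the inverse temperature t raised adaptively from t0 toward 1.
import numpy as np

def log_tempered_posterior(theta, t, log_prior, log_lik):
    """Log of the tempered posterior at inverse temperature t (illustrative)."""
    return log_prior(theta) + t * log_lik(theta)

def adaptive_increment(log_lik_samples, tol=0.01):
    """Take smaller steps where the tempered posterior changes quickly (assumed rule)."""
    return tol / (np.std(log_lik_samples) + 1e-12)

t = 0.05                                    # initial inverse temperature t0
while t < 1.0:
    # In practice: sample the current flow, evaluate the (surrogate) log-likelihood,
    # update the flow against log_tempered_posterior, then raise t.
    log_lik_samples = np.random.randn(500)  # stand-in for surrogate evaluations
    t = min(1.0, t + adaptive_increment(log_lik_samples))
```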

Nicolas Garcia Trillos

U. of Wisconsin

Title: Analysis of adversarial robustness and of other problems in modern machine learning

Modern machine learning methods, in particular deep learning approaches, have enjoyed unparalleled success in a variety of challenging application fields like image recognition, medical image reconstruction, and natural language processing. While the vast majority of previous research in machine learning has focused on constructing and understanding models with high predictive power, a consensus has emerged that other properties, like stability and robustness of models, are of equal importance and in many applications essential. This has motivated researchers to investigate the problem of adversarial training (or how to make models robust to adversarial attacks), but despite the development of several computational strategies for adversarial training and some theoretical progress in the broader distributionally robust optimization literature, several theoretical questions about it remain relatively unexplored. In this talk, I will take an analytical perspective on the adversarial robustness problem and explore three questions: 1) What is the connection between adversarial robustness and inverse problems? 2) Can we use analytical tools to find lower bounds for adversarial robustness problems? 3) How do we use modern tools from analysis and geometry to solve adversarial robustness problems? At its heart, this talk is an invitation to view adversarial machine learning through the lens of mathematical analysis, showcasing a variety of rich connections with perimeter minimization problems, optimal transport, mean field PDEs of interacting particle systems, and min-max games in spaces of measures. The talk is based on joint works with Leon Bungert (Bonn), Camilo A. García Trillos (UAL), Matt Jacobs (Purdue), Jakwang Kim (Wisc), and Ryan Murray (NCState).
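For reference, the adversarial training problem alluded to above, and the distributionally robust formulation it is commonly compared with, can be written in generic notation as follows; the specific choices of norm, ball, and loss are illustrative, not specific to the talk.

```latex
% Pointwise adversarial training (left) and a distributionally robust counterpart
% over a Wasserstein-type ball around the data distribution \mu (right).
\min_{f}\ \mathbb{E}_{(x,y)\sim\mu}\Big[\ \sup_{\|\tilde{x}-x\|\le\varepsilon}\ \ell\big(f(\tilde{x}),y\big)\Big],
\qquad
\min_{f}\ \sup_{\widetilde{\mu}\,:\ W(\widetilde{\mu},\mu)\le\varepsilon}\ \mathbb{E}_{(\tilde{x},y)\sim\widetilde{\mu}}\big[\ell(f(\tilde{x}),y)\big].
```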

Mateo Díaz

California Institute of Technology

Title: Clustering a mixture of Gaussians with unknown covariance

Clustering is a fundamental data-scientific task with broad applications. This talk investigates a simple clustering problem with data from a mixture of Gaussians that share a common but unknown, and potentially ill-conditioned, covariance matrix. We consider Gaussian mixtures with two equally sized components and derive a Max-Cut integer program based on maximum likelihood estimation. We show that its solutions achieve the optimal misclassification rate when the number of samples grows linearly in the dimension, up to a logarithmic factor. However, solving the Max-Cut problem appears to be computationally intractable. To overcome this, we develop an efficient spectral algorithm that attains the optimal rate but requires a quadratic sample size. Although this sample complexity is worse than that of the Max-Cut approach, we conjecture that no polynomial-time method can perform better. Furthermore, we present numerical and theoretical evidence that supports the existence of a statistical-computational gap.
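As orientation for how such a Max-Cut program can arise, here is a hedged sketch in generic notation: for a symmetric two-component mixture with n-by-d data matrix X and second-moment matrix S = (1/n) sum of x_i x_i^T, profiling the likelihood over the mean and covariance leaves a quadratic integer program over the label vector s. This is an illustration, not necessarily the exact program from the talk.

```latex
% Profiled-likelihood sketch: maximizing the likelihood over the labels reduces
% to a Max-Cut-type quadratic integer program.
\hat{\mu}(s) = \frac{1}{n}\sum_{i=1}^{n} s_i x_i ,
\qquad
\max_{s\in\{\pm 1\}^n}\ \hat{\mu}(s)^{\top} S^{-1}\hat{\mu}(s)
\;=\;
\max_{s\in\{\pm 1\}^n}\ \frac{1}{n^{2}}\ s^{\top} X S^{-1} X^{\top} s .
```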

Paula Aguirre

UC Chile

Title: Combining earth observation data and hybrid machine learning methods for modelling of natural hazards: the case of landslides and wildfires

The exponentially increasing flow of Earth observation data offers huge potential for the use of machine learning in the study of complex problems in geosciences, and there are abundant examples of successful applications across all sub-domains and from global to local scales. However, the nature of remote sensing data, and of some of the phenomena under study, presents challenges related to the integration of multi-sensor data with a wide range of resolutions and sensitivities, the scarcity of labeled data, and the adequate representation of the spatial and temporal context and uncertainties that dominate some physical processes. As in other scientific fields, one approach for advancing the performance, generalization, and extrapolation capabilities of purely data-driven models of Earth systems is to blend deep learning methods with existing domain knowledge, materialized, for example, as physical principles, symmetries, constraints, computational simulations, and parametric models.

This talk illustrates some of the limitations of traditional deep learning methods and the novel hybrid modelling approaches currently being advanced in geosciences, focusing on two hazards that, due to the effects of climate change, pose an increasing risk to natural and built environments: landslides and wildfires. Improving the ability of AI tools to predict the probability of occurrence, intensity, and propagation of these phenomena is crucial to managing their ecological and societal impacts, and presents opportunities for interdisciplinary research involving forestry, geology, and deep learning expertise.

Anastasios Matzavinos

UC Chile

Title: Data assimilation and uncertainty quantification in molecular dynamics

A recent approach to Bayesian uncertainty quantification using transitional Markov chain Monte Carlo (TMCMC) is extremely parallelizable and has opened the door to a variety of applications which were previously too computationally intensive to be practical. In this talk, we first explore the machinery required to understand and implement Bayesian uncertainty quantification using TMCMC. We then describe dissipative particle dynamics, a computational particle simulation method which is suitable for modeling extended biomolecular structures, and develop an example simulation of a lipid bilayer membrane in fluid. Finally, we apply the algorithm to a basic model of uncertainty in our lipid simulation, effectively recovering a target set of parameters (along with distributions corresponding to the uncertainty) and demonstrating the practicality of Bayesian uncertainty quantification for complex particle simulations.
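The core of the TMCMC machinery is the tempering step: the likelihood exponent is raised just enough at each stage that the reweighting of the current samples stays well conditioned, and each stage's short MCMC chains run independently, which is where the parallelism comes from. The sketch below is a generic illustration of that step, not the exact implementation discussed in the talk.

```python
# Generic TMCMC tempering step: choose the next exponent beta so that the
# coefficient of variation of the stage weights L(theta)^(beta_new - beta_old)
# stays near a target value (a common heuristic).
import numpy as np

def next_beta(log_lik, beta, target_cov=1.0):
    """Bisection for the next tempering exponent, given current-sample log-likelihoods."""
    lo, hi = beta, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        w = np.exp((mid - beta) * (log_lik - log_lik.max()))  # stabilized weights
        if w.std() / w.mean() > target_cov:
            hi = mid
        else:
            lo = mid
    return hi

# Each stage then resamples according to the weights and runs short, independent
# MCMC chains from the survivors (the embarrassingly parallel part).
log_lik = np.random.randn(1000)   # stand-in for model log-likelihood evaluations
beta_next = next_beta(log_lik, beta=0.0)
```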

Francisco Sahli

UC Chile

Title: Δ-PINNs: physics-informed neural networks on complex geometries

Physics-informed neural networks (PINNs) have demonstrated promise in solving forward and inverse problems involving partial differential equations. Despite recent progress in expanding the class of problems that can be tackled by PINNs, most existing use cases involve simple geometric domains. To date, there is no clear way to inform PINNs about the topology of the domain in which the problem is being solved. In this work, we propose a novel positional encoding mechanism for PINNs based on the eigenfunctions of the Laplace-Beltrami operator. This technique creates an input space for the neural network that represents the geometry of a given object. We approximate the eigenfunctions, as well as the operators involved in the partial differential equations, with finite elements. We extensively test and compare the proposed methodology against traditional PINNs on complex shapes, such as a coil, a heat sink, and a bunny, with different physics, such as the Eikonal equation and heat transfer. We also study the sensitivity of our method to the number of eigenfunctions used, as well as to the discretization used for the eigenfunctions and the underlying operators. Our results show excellent agreement with the ground truth data in cases where traditional PINNs fail to produce a meaningful solution. We envision that this new technique will expand the effectiveness of PINNs to more realistic applications.
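A minimal sketch of the encoding step, assuming a graph Laplacian built from mesh edges as a simple stand-in for the finite-element Laplace-Beltrami operator used in the talk: each node is represented by its values of the first few nontrivial eigenfunctions, and that vector replaces raw coordinates as the network input.

```python
# Laplacian positional encoding: first k nontrivial eigenvectors of a (graph)
# Laplacian on the geometry, used as the PINN input instead of (x, y, z).
import numpy as np
from scipy.sparse import coo_matrix, diags
from scipy.sparse.linalg import eigsh

def laplacian_encoding(n_nodes, edges, k=16):
    """Return an (n_nodes, k) positional encoding from Laplacian eigenvectors."""
    i, j = edges[:, 0], edges[:, 1]
    w = np.ones(len(edges))
    A = coo_matrix((np.r_[w, w], (np.r_[i, j], np.r_[j, i])), shape=(n_nodes, n_nodes))
    L = diags(np.asarray(A.sum(axis=1)).ravel()) - A
    vals, vecs = eigsh(L.tocsc(), k=k + 1, sigma=-1e-6)  # smallest eigenpairs
    return vecs[:, 1:k + 1]                              # drop the constant mode

# Tiny example: a closed loop of 50 nodes standing in for a surface mesh.
n = 50
edges = np.array([(a, (a + 1) % n) for a in range(n)])
encoding = laplacian_encoding(n, edges, k=8)   # feed encoding[node] to the network
```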

Ignacio Muga

PUCV Chile

Title: Neural Control: improving the quality of finite-element solutions

We introduce the concept of neural control of discrete weak formulations of Partial Differential Equations (PDEs), in which finite element discretizations are enhanced with neural-network weight functions. The weight functions act as control variables that, through the minimization of a cost (or loss) functional, produce discrete solutions incorporating user-defined desirable attributes (e.g., known-data features, removal of spurious oscillations, or precision in certain quantities of interest). Well-posedness and convergence of the cost-minimization problem are analyzed. In particular, we prove, under certain conditions, that the discrete weak forms are stable and that quasi-minimizing neural controls exist, which converge quasi-optimally. We specialize our analysis to Galerkin, least-squares, and minimal-residual formulations. Elementary numerical experiments support our findings and demonstrate the potential of the framework.
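In generic notation, the control problem described above takes the following schematic form, where the discrete solution depends on a neural-network weight function w_theta entering the weak formulation and J is the user-defined cost. The exact way the weight function enters the bilinear and linear forms is left abstract here and is not necessarily the formulation of the talk.

```latex
% Schematic neural-control problem: minimize a cost over network parameters,
% subject to the (weight-dependent) discrete weak form.
\min_{\theta}\ J\big(u_h(w_\theta)\big)
\quad\text{subject to}\quad
b_{w_\theta}\big(u_h(w_\theta),\,v_h\big) \;=\; \ell_{w_\theta}(v_h)
\qquad \forall\, v_h \in V_h .
```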

Luis Martí

INRIA Chile

Title: Advancing AI and ML for understanding the Ocean and Climate Change

Artificial Intelligence (AI) has a dual role with respect to climate change. On one hand, AI holds great promise for understanding, mitigating, and adapting to climate change; on the other hand, the ecological impact of AI itself has been neglected by researchers and industry. In this talk, I will describe the complexity of this duality. I will dive into the cohort approach, in which our multi-disciplinary teams collaborate to bring together the best of different scientific areas. I will do this by going over some interrelated projects that we are working on at Inria Chile.