Mauro Maggioni
Johns Hopkins University, U.S.A.
Bio: Mauro Maggioni works at the intersection of harmonic analysis, approximation theory, high-dimensional probability, statistical and machine learning, model reduction, stochastic dynamical systems, and statistical signal processing. He received a Ph.D. in Mathematics from Washington University in St. Louis in 2002; after four years as a Gibbs Assistant Professor in Mathematics at Yale University, he moved to Duke University, becoming Professor of Mathematics, Electrical and Computer Engineering, and Computer Science in 2013. He is currently a Bloomberg Distinguished Professor of Mathematics and of Applied Mathematics and Statistics at Johns Hopkins University. He received the Popov Prize in Approximation Theory in 2007, an N.S.F. CAREER award and a Sloan Fellowship in 2008, and an inaugural Fellowship of the American Mathematical Society in 2013, and was a Simons Research Fellow in 2021.
KEYNOTE
Learning interaction laws in particle systems, and digital twins in cardiac electrophysiology
Abstract: I will discuss recent results in two research directions in scientific machine learning.
Firstly, we consider systems of interacting agents or particles, which are commonly used for modeling across the sciences. While these systems have very high-dimensional state spaces, the laws of interaction between the agents may be quite simple; for example, they may depend only on pairwise interactions, and only on the pairwise distance in each interaction. We consider the inference problem of learning the interaction laws given only observed trajectories of the agents in the system. We would like to solve this without assuming any particular form for the interaction laws, i.e., they might be “any” function of pairwise distances, or other variables, on Euclidean spaces, manifolds, or networks. We consider this problem in the case of a finite number of agents, with observations along an increasing number of paths. We cast this as an inverse problem, discuss when it is well-posed, and construct estimators for the interaction kernels with provably good statistical and computational properties. We discuss (1) the fundamental role of the geometry of the underlying space, in the cases of Euclidean spaces, manifolds, and networks, even when the network is unknown; and (2) extensions to second-order systems, more general interaction kernels, and stochastic systems. This is joint work with Q. Lang (Duke), F. Lu (JHU), S. Tang (UCSB), X. Wang (JHU), M. Zhong (UH).
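As a toy illustration of the inference problem described above (a minimal sketch, not the estimators from the talk), the code below simulates a first-order system whose right-hand side depends only on pairwise distances, then recovers the radial interaction kernel from the observed trajectory by least squares in a piecewise-constant basis. The kernel `phi_true`, the system sizes, and the basis resolution are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, T, dt, M = 8, 2, 200, 0.01, 40  # agents, dimension, steps, step size, basis bins

def phi_true(r):
    # hypothetical radial interaction kernel (illustrative assumption)
    return np.exp(-r)

def rhs(X, phi):
    # first-order dynamics: dx_i/dt = (1/N) * sum_j phi(|x_j - x_i|) (x_j - x_i)
    diff = X[None, :, :] - X[:, None, :]           # diff[i, j] = x_j - x_i
    r = np.linalg.norm(diff, axis=2)
    w = np.where(r > 0, phi(r), 0.0)               # skip the i = j term
    return (w[:, :, None] * diff).sum(axis=1) / N

# simulate one trajectory with forward Euler from random initial positions
X = rng.normal(size=(N, d))
traj = [X.copy()]
for _ in range(T):
    X = X + dt * rhs(X, phi_true)
    traj.append(X.copy())
traj = np.asarray(traj)

# finite-difference velocities are the observed data
V = (traj[1:] - traj[:-1]) / dt                    # (T, N, d)
Xs = traj[:-1]

# pairwise differences and distances at every observed time
diffs = Xs[:, None, :, :] - Xs[:, :, None, :]      # diffs[t, i, j] = x_j - x_i
r = np.linalg.norm(diffs, axis=3)
rmax = r.max() * (1 + 1e-9)

# least-squares estimate of phi in a piecewise-constant basis on [0, rmax]
bins = np.minimum((r / rmax * M).astype(int), M - 1)
A = np.zeros((T * N * d, M))
for m in range(M):
    mask = (bins == m) & (r > 0)
    A[:, m] = ((mask[..., None] * diffs).sum(axis=2) / N).reshape(-1)
coef, *_ = np.linalg.lstsq(A, V.reshape(-1), rcond=None)

rel = np.linalg.norm(A @ coef - V.reshape(-1)) / np.linalg.norm(V)
print(f"relative fit residual: {rel:.3f}")
```

The estimated coefficients approximate `phi_true` near the bin centers wherever the distance bins are populated by the data; the results discussed in the talk make this precise, including the role played by the geometry of the distribution of pairwise distances.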
In the second part of the talk, I will discuss recent applications of deep learning in the context of digital twins in cardiology, and in particular the use of operator-learning architectures for predicting solutions of parametric PDEs, or functionals thereof, on a family of diffeomorphic domains, which we apply to the prediction of medically relevant electrophysiological features of heart digital twins. This is joint work with M. Yin (JHU), N. Charon (UH), R. Brody (JHU), L. Lu (Yale), N. Trayanova (JHU).
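To give a flavor of the branch–trunk factorization behind DeepONet-style operator learning (a minimal sketch, not the architectures or cardiac models from the talk), the code below approximates an operator as G(u)(y) ≈ Phi(u)ᵀ W Psi(y), with fixed random linear branch features of the sensor values of u, a small cosine trunk basis in y, and W fitted by linear least squares on the antiderivative operator. All function classes, grids, and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
K, m, p, n = 5, 50, 10, 200    # Fourier modes, sensors, branch width, training functions
xs = np.linspace(0, 1, m)      # sensor grid for the input functions
ys = np.linspace(0, 1, 40)     # query points for the output functions
k = np.arange(1, K + 1)

# input functions u(x) = sum_k a_k sin(k pi x); the target operator is the
# antiderivative G(u)(y) = int_0^y u = sum_k a_k (1 - cos(k pi y)) / (k pi)
a = rng.normal(size=(n, K))
U = a @ np.sin(np.pi * np.outer(k, xs))                                 # (n, m)
S = a @ ((1 - np.cos(np.pi * np.outer(k, ys))) / (np.pi * k)[:, None])  # (n, 40)

# branch: fixed random linear features of the sensor values of u
B = rng.normal(size=(p, m)) / np.sqrt(m)
Phi = U @ B.T                                                           # (n, p)

# trunk: a small cosine basis in the query coordinate y
q = K + 1
Psi = np.cos(np.pi * np.outer(np.arange(q), ys)).T                      # (40, q)

# fit the combination matrix W in G(u)(y) ~ Phi(u) @ W @ Psi(y)^T by least squares
design = np.einsum('np,Qq->nQpq', Phi, Psi).reshape(n * len(ys), p * q)
w, *_ = np.linalg.lstsq(design, S.reshape(-1), rcond=None)
W = w.reshape(p, q)

rel = np.linalg.norm(Phi @ W @ Psi.T - S) / np.linalg.norm(S)
print(f"relative training error: {rel:.2e}")
```

In actual operator-learning architectures, both the branch and the trunk are trained neural networks rather than fixed features; the setting in the talk additionally handles solutions of parametric PDEs posed on a family of diffeomorphic domains.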