Dispersive Models and Deep Neural Networks
Dispersive Models and Deep Neural Networks refers to the study of partial differential equations whose solutions spread, or disperse, over time, and to their interaction with modern machine learning methods.
On the one hand, dispersive models capture essential phenomena in physics and mathematics, such as wave propagation and stability. On the other hand, deep neural networks provide flexible tools to approximate, simulate, and even discover hidden structures within these models.
The combination of both fields aims to develop a rigorous mathematical framework in which analytical properties of dispersive equations can be enhanced or complemented by the computational power of neural architectures.
Within this framework, the group seeks to provide a rigorous setting for the simulation of the best-known dispersive models, such as the nonlinear wave, Schrödinger, and Korteweg-de Vries equations.
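As a minimal illustration of the dispersive behavior these models share, the sketch below solves the Airy equation u_t + u_xxx = 0 (the linear part of Korteweg-de Vries) spectrally on a periodic grid; the grid size, domain length, and Gaussian initial datum are illustrative choices, not taken from the group's work.

```python
import numpy as np

# Periodic spectral solution of the Airy equation u_t + u_xxx = 0,
# the linear part of KdV. Dispersion spreads the initial pulse.
N = 256
L = 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

u0 = np.exp(-x**2)              # Gaussian initial datum
u_hat0 = np.fft.fft(u0)

def evolve(t):
    # In Fourier space the equation reads d/dt u_hat = i k^3 u_hat,
    # so each mode is integrated exactly.
    return np.fft.ifft(u_hat0 * np.exp(1j * k**3 * t)).real

u1 = evolve(1.0)

# Hallmarks of dispersion: the L^2 norm is conserved while the
# amplitude (sup norm) decays as the wave spreads out.
dx = L / N
l2_0 = np.sqrt(np.sum(u0**2) * dx)
l2_1 = np.sqrt(np.sum(u1**2) * dx)
print(round(l2_0, 6), round(l2_1, 6))   # equal up to round-off
print(u1.max() < u0.max())              # True: amplitude decays
```

The exact Fourier integration makes the conservation of mass visible to machine precision, which is the kind of structural property a learned surrogate for these equations should also respect.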
Operator Nets and the Calderón Problem
This line refers to the use of operator learning architectures, often called operator neural networks, to study and approximate solutions to the inverse conductivity problem posed by Calderón.
The Calderón problem asks whether one can determine the internal conductivity of a medium from boundary measurements, a central question in inverse problems and medical imaging.
Operator networks provide a framework to learn mappings between infinite-dimensional function spaces, making them well suited to approximate the nonlinear relation between boundary data and interior properties.
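A DeepONet-style architecture makes this idea concrete: a branch network encodes sampled boundary measurements, a trunk network encodes interior query points, and their inner product yields the predicted interior quantity. The sketch below shows only the forward pass with random, untrained weights; all sizes and the toy boundary measurement are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(params, x):
    # Plain fully connected network with tanh activations.
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

def init(sizes):
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

m, p = 32, 16               # boundary samples, latent feature width
branch = init([m, 64, p])   # encodes sampled boundary measurements
trunk  = init([2, 64, p])   # encodes an interior query point (x, y)

def G(boundary_data, points):
    # DeepONet-style evaluation: <branch(u), trunk(x)> at each point.
    b = mlp(branch, boundary_data)   # latent code of the measurement, (p,)
    t = mlp(trunk, points)           # features of query points, (n_pts, p)
    return t @ b                     # predicted interior values, (n_pts,)

u = np.sin(np.linspace(0, 2 * np.pi, m))    # toy boundary measurement
pts = rng.uniform(-1, 1, size=(100, 2))     # interior query points
sigma_pred = G(u, pts)
print(sigma_pred.shape)                     # (100,)
```

The separation into branch and trunk is what lets the same trained network be queried at arbitrary interior points, mirroring the function-to-function character of the boundary-to-conductivity map.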
This approach bridges rigorous mathematical analysis with modern computational methods, opening new perspectives on one of the most influential inverse problems in mathematics.
Approximation of Nonlocal PDEs
The study of nonlocal partial differential equations and deep neural networks concerns the analysis of PDEs where the evolution at a point depends not only on local values and derivatives, but also on interactions across distant regions, often modeled through integral operators or fractional derivatives.
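A prototypical nonlocal operator of this kind is the fractional Laplacian, which on a periodic domain can be computed through its Fourier symbol |k|^{2s}; the sketch below verifies this on a trigonometric eigenfunction (grid size and the choice s = 1/2 are illustrative).

```python
import numpy as np

# Fractional Laplacian (-Delta)^s on a periodic grid via its Fourier
# symbol |k|^(2s): every point interacts with every other point.
N, L, s = 256, 2 * np.pi, 0.5
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

def frac_laplacian(u, s):
    return np.fft.ifft(np.abs(k)**(2 * s) * np.fft.fft(u)).real

# On the eigenfunction sin(2x): (-Delta)^s sin(2x) = |2|^(2s) sin(2x),
# so with s = 1/2 the output is exactly 2 sin(2x).
u = np.sin(2 * x)
v = frac_laplacian(u, s)
print(np.allclose(v, 2 * np.sin(2 * x)))   # True
```

The same multiplier formulation underlies many spectral schemes for nonlocal PDEs, and it is also a natural target for operator-learning surrogates when the symbol is unknown.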
These equations arise naturally in physics, biology, and finance, where long-range effects play a fundamental role. Deep neural networks provide a complementary framework to approximate solutions, learn hidden operators, and accelerate simulations of such nonlocal dynamics.
By combining rigorous mathematical analysis with modern machine learning, this area aims to better understand the qualitative behavior of nonlocal PDEs and to develop efficient computational methods capable of addressing problems that are otherwise intractable.
Understanding stellar atmospheric models using DNN methods
This research line refers to the application of deep neural networks to analyze, approximate, and interpret the complex physical processes that govern the structure and behavior of stellar atmospheres.
These models, traditionally built on detailed radiative transfer and fluid dynamics, can be computationally demanding. By employing deep learning techniques, one seeks to capture essential patterns, accelerate simulations, and provide new insights into the dynamics and observable features of stars, while maintaining consistency with the underlying physics.
The group has been working with an interdisciplinary team to develop Physics-Informed Neural Networks capable of emulating and extending stellar atmospheric models into regimes that standard methods cannot reach.
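To indicate how such a physics-informed constraint can look in practice, the sketch below assembles a loss for the plane-parallel radiative transfer equation mu dI/dtau = I - S(tau): a PDE residual evaluated by finite differences plus a boundary term (no incoming radiation at tau = 0). The network, the constant source function, and the boundary condition are simplified stand-ins, not the group's actual model.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp(params, tau):
    # Small fully connected network I_theta(tau) with tanh activations.
    h = tau.reshape(-1, 1)
    for W, b in params[:-1]:
        h = np.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b).ravel()

def init(sizes):
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

params = init([1, 32, 32, 1])
mu = 1.0
S = lambda tau: np.ones_like(tau)   # toy constant source function

def pinn_loss(params, tau):
    # Physics residual of mu * dI/dtau = I - S, via central differences.
    eps = 1e-4
    I = mlp(params, tau)
    dI = (mlp(params, tau + eps) - mlp(params, tau - eps)) / (2 * eps)
    residual = mu * dI - (I - S(tau))
    # Boundary term: no incoming radiation at the surface tau = 0.
    bc = mlp(params, np.array([0.0]))
    return np.mean(residual**2) + np.mean(bc**2)

tau = np.linspace(0.0, 5.0, 64)
print(float(pinn_loss(params, tau)))
```

Minimizing such a loss over the network weights (here left untrained) is what allows a PINN to respect the governing physics even in regions where training data from standard solvers is sparse.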