TITLE: Augmented predictor–corrector PINN for estimating root-zone soil moisture from data at shallower depths
ABSTRACT: The difficulty of placing soil moisture sensors at the lower depths of the root zone poses a major limitation for both hydrological modeling and irrigation management. In this work, we introduce a predictor–corrector Physics-Informed Neural Network (PINN) framework designed to infer soil water content at 60 cm depth using only observations available at 30 cm. The predictor–corrector architecture allows for a physically consistent data augmentation strategy, in which additional informative samples are generated over a depth domain ranging from 0 to 60 cm, enabling the transition from Neumann to Dirichlet boundary conditions and the progressive improvement of predictions through successive training phases. The results indicate that the reconstructed soil moisture at 60 cm closely matches the observed dynamics, highlighting the capability of the proposed approach to recover soil moisture profiles even when direct measurements are unavailable. This methodology therefore represents a practical solution for extending observational datasets in real-world applications, particularly in situations where in situ sensors or satellite products are restricted to shallow soil layers.
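The abstract does not state the governing physics explicitly; a common choice for root-zone soil moisture PINNs, sketched here as an assumption rather than as the authors' exact formulation, is the 1D Richards equation with a sink term,

```latex
\frac{\partial \theta(h)}{\partial t}
  = \frac{\partial}{\partial z}\!\left[ K(h)\left( \frac{\partial h}{\partial z} + 1 \right) \right] - S(z,t),
\qquad 0 \le z \le 60\ \text{cm},
```

where \(\theta\) is the volumetric water content, \(h\) the pressure head, \(K(h)\) the hydraulic conductivity, and \(S\) a root water uptake sink (sign conventions depend on the orientation of \(z\)). In this reading, a Neumann condition prescribes the water flux \(K(h)(\partial h/\partial z + 1)\) at the 60 cm boundary, while a Dirichlet condition prescribes the state itself, which is what the augmentation strategy progressively makes available.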
TITLE: Neural Preconditioning for Parametrized 3D-1D Problems
ABSTRACT: Mixed-dimensional partial differential equations, such as 3D domains coupled with embedded 1D networks, arise in fractured media, vascular systems, fiber-reinforced materials, and other multiscale scenarios. When the embedded network varies in geometry or topology, traditional solvers and preconditioners lose efficiency, as spectral properties shift and parametrized simulations become increasingly costly.
This work introduces a new paradigm in which preconditioning is learned. A Neural Preconditioner (NeP) is developed as a nonlinear operator capable of adapting to geometric and topological variability, providing uniform and scalable convergence across the parameter space.
Two complementary unsupervised training regimes are presented: a static strategy that learns spectral transformations to improve operator conditioning, and a dynamic strategy that incorporates Krylov subspace geometry directly into the optimization process.
Numerical experiments demonstrate that the learned preconditioner substantially accelerates the GMRES solver for parametrized 3D–1D problems, maintaining stable performance across geometric variation.
Overall, the results highlight a unified computational framework where machine learning, iterative solvers, and operator theory come together to advance the numerical treatment of complex multiscale PDEs.
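The paper's Neural Preconditioner is not publicly specified here, so the sketch below only illustrates the *interface pattern*: an approximate inverse wrapped as a `scipy` `LinearOperator` and handed to GMRES as `M`. The problem (1D Poisson) and the stand-in preconditioner (one symmetric Gauss–Seidel sweep) are illustrative assumptions; a trained nonlinear operator would additionally call for a flexible variant such as FGMRES.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres

# Stand-in problem: a 1D Poisson matrix. The paper's setting is a coupled
# 3D-1D system; this only illustrates the preconditioner interface.
n = 64
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()
b = np.ones(n)

counts = {"plain": 0, "prec": 0}

def counted(key):
    """Wrap A as a LinearOperator that counts matrix-vector products."""
    def mv(v):
        counts[key] += 1
        return A @ v
    return LinearOperator((n, n), matvec=mv, dtype=np.float64)

# Stand-in for the learned map x -> NeP(x) ~ A^{-1} x: one symmetric
# Gauss-Seidel sweep. A trained neural preconditioner would replace this.
Ad = A.toarray()
Lo, Up, D = np.tril(Ad), np.triu(Ad), np.diag(Ad)

def sgs(r):
    y = np.linalg.solve(Lo, r)          # forward sweep:  (D + L) y = r
    return np.linalg.solve(Up, D * y)   # backward sweep: (D + U) z = D y

M = LinearOperator((n, n), matvec=sgs, dtype=np.float64)

x_plain, info_plain = gmres(counted("plain"), b, restart=n, atol=1e-8)
x_prec, info_prec = gmres(counted("prec"), b, restart=n, atol=1e-8, M=M)
```

With the preconditioner, GMRES reaches the same tolerance in markedly fewer matrix-vector products, which is the behavior the learned operator is designed to preserve across the parameter space.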
TITLE: Optimizing Liquidity Provision in Uniswap v3 via Physics-Informed Neural Networks
ABSTRACT: Decentralized Exchanges are rapidly changing financial markets by using blockchain technology to eliminate intermediaries. Among these, Uniswap v3 is the most prominent due to its Concentrated Liquidity mechanism, which allows liquidity providers to allocate capital within flexible price ranges, thus increasing potential revenues. This feature brings a key trade-off: narrower ranges increase both potential returns and the risk of inactive liquidity; wider ranges ensure continuous but lower profits. Thus, developing approaches for choosing the optimal liquidity provision range is becoming a predominant task in both industry and academia. In this work, we propose a novel framework for optimizing liquidity provision in Uniswap v3 using Physics-Informed Neural Networks (PINNs). Our approach models market dynamics through stochastic processes and employs the Feynman-Kac theorem to compute the expected utility associated with the provision position as the solution of a Partial Differential Equation (PDE). This PDE is then solved using PINNs, enabling a fast approximation of expected utility. In such a way, it is possible to efficiently optimize the liquidity allocation in real-time with minimal computational cost. We assess our methodology through numerical experiments, where the backtesting results over eight pools demonstrate its effectiveness in optimizing liquidity provision performance. Thus, our results highlight the potential of the proposed framework for real-world applications.
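The specific dynamics and utility are not spelled out in the abstract, but the generic Feynman-Kac correspondence it relies on reads as follows (the paper's concrete choices may differ). If the market state follows \(\mathrm{d}S_t = \mu(S_t)\,\mathrm{d}t + \sigma(S_t)\,\mathrm{d}W_t\) and the expected utility of a position held to horizon \(T\) is \(u(t,s) = \mathbb{E}\big[U(S_T)\,\big|\,S_t = s\big]\), then \(u\) solves the backward PDE

```latex
\frac{\partial u}{\partial t} + \mu(s)\,\frac{\partial u}{\partial s}
  + \tfrac{1}{2}\,\sigma(s)^2\,\frac{\partial^2 u}{\partial s^2} = 0,
\qquad u(T,s) = U(s),
```

which is the equation the PINN approximates. Once trained, the network evaluates \(u\) pointwise, so optimizing the liquidity range reduces to maximizing \(u\) over the range parameters at negligible marginal cost.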
TITLE: Constrained training of neural networks with applications to physics-inspired neural networks
ABSTRACT: There has been considerable interest in constrained training of deep neural networks (DNNs). Crucially, several independent approaches to physics-inspired neural networks and neural solvers for differential equations yield stochastically constrained stochastic optimization problems (e.g., https://doi.org/10.1016/j.jcp.2020.109278, https://doi.org/10.1016/j.camwa.2023.12.016, https://arxiv.org/abs/2410.22796). Several algorithms have been proposed for this task, yet there are very few implementations. We present train (https://github.com/humancompatible/train), an easily extensible PyTorch-based Python package for optimization with stochastic constraints, which implements multiple previously unimplemented algorithms for stochastically constrained stochastic optimization.
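To make the problem class concrete, here is a minimal numpy sketch (illustrative problem and constants, not the `train` package's API): an objective and a constraint that are both observed only through noisy samples, handled with a quadratic-penalty stochastic gradient method.

```python
import numpy as np

# Toy stochastically constrained stochastic program:
#   minimize   E[(x - Z_f)^2],    Z_f ~ N(0, 0.1^2)   (unconstrained optimum x = 0)
#   subject to E[1 - x + Z_g] <= 0,  Z_g ~ N(0, 0.1^2)  (i.e. x >= 1)
rng = np.random.default_rng(0)
x, mu, lr0 = 0.0, 50.0, 0.01
for k in range(20000):
    z_f = rng.normal(scale=0.1)
    z_g = rng.normal(scale=0.1)
    grad_obj = 2.0 * (x - z_f)        # unbiased gradient of E[(x - Z_f)^2]
    g = 1.0 - x + z_g                 # sampled constraint value
    grad_pen = -mu * max(g, 0.0)      # gradient of (mu/2) * max(g, 0)^2
    x -= lr0 / (1.0 + 1e-3 * k) * (grad_obj + grad_pen)
# x now sits near the constrained optimum x* = 1 (the finite penalty
# weight mu leaves a small bias).
```

Dedicated algorithms of the kind the package implements improve on this naive penalty scheme, e.g. by controlling constraint violation without hand-tuning `mu`.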
TITLE: Numerical methods within Neural Networks framework for PDEs
ABSTRACT: Models based on Partial Differential Equations (PDEs) arise in a variety of applications, including the life cycle of batteries, vegetation dynamics, material corrosion, and renewable energy production. For their solution, in addition to standard numerical methods, recent approaches exploit Neural Networks (NNs), in particular Physics-Informed Neural Networks (PINNs) [1]. The latter are designed to approximate the solution of PDEs in both space and time by enforcing the governing equations. In this talk, divided into two parts, we focus on the use and integration of standard numerical methods within NN frameworks.
In the first part we introduce new efficient W-methods for multidimensional PDEs, based on matrix-oriented and splitting techniques, discussing their accuracy, stability, and computational cost [2, 3]. We then present a case highlighting the importance of these efficient solvers: calibrating the 2D Klausmeier vegetation system [4] using satellite data, where NNs are trained on datasets generated by repeatedly solving the PDE model for different parameter values.
In the second part, we show how standard numerical methods can be directly incorporated
into neural networks, leading to discrete-time PINNs [5]. In these NNs, the outputs approximate the numerical solution at each time step. Numerical tests show that these new discrete-time PINNs can compete with standard methods and other neural networks, while efficiently addressing inverse problems like parameter estimation.
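For concreteness, one established construction of discrete-time PINNs (Raissi et al. [1]; the step-by-step scheme of [5] may differ in detail) wraps a \(q\)-stage implicit Runge-Kutta step: for \(u_t = \mathcal{N}[u]\), the network outputs the stage values \(u^{n+c_i}(x)\) and the step value \(u^{n+1}(x)\), and training minimizes the residuals of

```latex
u^{n+c_i} = u^n + \Delta t \sum_{j=1}^{q} a_{ij}\,\mathcal{N}\big[u^{n+c_j}\big],
\qquad
u^{n+1} = u^n + \Delta t \sum_{j=1}^{q} b_j\,\mathcal{N}\big[u^{n+c_j}\big],
```

so each training phase advances the solution by one (possibly large) time step while inheriting the stability properties of the underlying implicit scheme.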
Acknowledgements: this work has been supported by the PRIN PNRR 2022 projects
P20228C2PP “BAT-MEN” and P2022WC2ZZ “MatForPat”.
References
[1] M. Raissi, P. Perdikaris, G. E. Karniadakis. Physics-informed neural networks: A deep
learning framework for solving forward and inverse problems involving nonlinear partial
differential equations. J. Comput. Phys. 378, 686–707 (2019).
[2] D. Conte, S. Iscaro, G. Pagano, B. Paternoster. Matrix-oriented W-methods for 2D PDEs:
derivation and comparison with Approximate Matrix Factorization. In preparation.
[3] D. Conte, S. González-Pinto, D. Hernández-Abreu, G. Pagano. On Approximate Matrix
Factorization and TASE W-methods for the time integration of parabolic Partial Differential
Equations. J. Sci. Comput., 100, 34 (2024).
[4] A. Marasco, A. Iuorio, F. Cartení, G. Bonanomi, D. M. Tartakovsky, S. Mazzoleni, F. Giannino.
Vegetation Pattern Formation Due to Interactions Between Water Availability and Toxicity in
Plant–Soil Feedback. Bull. Math. Biol., 76, 2866–2883 (2014).
[5] C. Valentino, G. Pagano, D. Conte, B. Paternoster, F. Colace, M. Casillo. Step-by-step time
discrete Physics Informed Neural Networks with application to a sustainability PDE model.
Math. Comput. Simul., 230, 541–558 (2025).
TITLE: Approximation of SDEs with artificial neural networks
ABSTRACT: This talk presents a new methodology for approximating sample paths of stochastic differential equations (SDEs) using artificial neural networks. The method is based on a Doss–Sussmann transformation of the initial SDE into a random ordinary differential equation (ODE). The
approach targets pathwise accuracy, highlights practical training strategies, and is demonstrated on benchmark SDE models. This is a joint work with Marcin Baranek (AGH University of Krakow, Faculty of Applied Mathematics).
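The core transformation can be checked numerically. For additive noise, \(\mathrm{d}X = a(X)\,\mathrm{d}t + \sigma\,\mathrm{d}W\) maps to the random ODE \(Y'(t) = a(Y(t) + \sigma W(t))\) with \(X = Y + \sigma W\); the sketch below (Ornstein-Uhlenbeck drift, my choice of example) verifies that explicit Euler on the random ODE reproduces Euler-Maruyama on the same Brownian path exactly. The talk's method replaces the ODE solver with a trained neural network, which is where higher pathwise accuracy can be sought.

```python
import numpy as np

# Doss-Sussmann for additive noise: dX = a(X) dt + sigma dW becomes the
# random ODE  Y'(t) = a(Y(t) + sigma * W(t)),  with X = Y + sigma * W.
rng = np.random.default_rng(42)
theta, sigma, x0 = 1.0, 0.5, 1.0
T, N = 1.0, 1000
h = T / N
dW = rng.normal(scale=np.sqrt(h), size=N)
W = np.concatenate([[0.0], np.cumsum(dW)])

a = lambda x: -theta * x  # Ornstein-Uhlenbeck drift (illustrative choice)

# Euler-Maruyama directly on the SDE.
x_em = np.empty(N + 1); x_em[0] = x0
for n in range(N):
    x_em[n + 1] = x_em[n] + h * a(x_em[n]) + sigma * dW[n]

# Euler on the Doss-Sussmann random ODE, then map back X = Y + sigma * W.
y = np.empty(N + 1); y[0] = x0
for n in range(N):
    y[n + 1] = y[n] + h * a(y[n] + sigma * W[n])
x_ds = y + sigma * W

err = np.max(np.abs(x_ds - x_em))  # identical up to round-off for this scheme
```

The agreement is exact (not just small) because the Euler updates of the two formulations coincide algebraically for additive noise; the random-ODE form pays off once one substitutes a better pathwise solver or a neural approximator.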
References:
[1] M. Baranek, P. Przybyłowicz. Deep learning approach for approximate solving of stochastic differential equations. In preparation.
[2] K. Spendier. On Numerical Methods for Stochastic Differential Equations. PhD Thesis, University of Klagenfurt (2024). https://netlibrary.aau.at/obvuklhs/content/titleinfo/10238204
[3] J. O’Leary, J. A. Paulson, A. Mesbah. Stochastic physics-informed neural ordinary differential equations. J. Comput. Phys. 468, 111466 (2022).
[4] A. Neufeld, P. Schmocker. Solving stochastic partial differential equations using neural networks in the Wiener chaos expansion. https://arxiv.org/abs/2411.03384
[5] Y. Zhu, Y.-H. Tang, C. Kim. Learning stochastic dynamics with statistics-informed neural network. J. Comput. Phys. 474, 111819 (2023).
TITLE: When PINNs meet Richards: Physics-Informed Neural Networks applied to inverse problems in soil science
ABSTRACT: Inverse problems for soil hydraulic parameter estimation are fundamental in vadose zone hydrology, yet remain challenging due to parameter non-identifiability, nonlinearity, and sparse observations. This work-in-progress study investigates inverse physics-informed neural networks (inverse PINNs) for parameter estimation in the 1D Richards equation with root water uptake (RWU) from sparse sensor data.
Our preliminary findings suggest that vanilla inverse PINNs struggle when the forward model contains non-smooth components (e.g., Feddes-type stress functions) and strong parameter–state decoupling. We are exploring a practical, fully differentiable stack: a Richards formulation that appears more PINN-friendly (Gardner constitutive relations and Kirchhoff transformation), and a mechanistic RWU model (Couvreur-type macroscopic sink terms) that is differentiable with respect to plant parameters.
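To make the "PINN-friendly" remark concrete (this is the standard Gardner/Kirchhoff derivation, not a claim about the authors' exact formulation): with the Gardner conductivity \(K(h) = K_s e^{\alpha h}\), the Kirchhoff potential linearizes the flux term of the 1D Richards equation,

```latex
\Phi(h) = \int_{-\infty}^{h} K(s)\,\mathrm{d}s = \frac{K_s}{\alpha}\,e^{\alpha h},
\qquad
\frac{\partial \theta}{\partial t}
  = \frac{\partial^2 \Phi}{\partial z^2} + \alpha\,\frac{\partial \Phi}{\partial z} - S(z,t),
```

with \(z\) pointing upward. The stiff nonlinearity inside the divergence disappears (since \(K = \alpha\Phi\)), leaving residuals that are smooth in \(\Phi\), which is precisely what gradient-based PINN training prefers.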
TITLE: From the Digital Model to the Simulation: an Application of PINNs for Cultural Heritage Conservation
ABSTRACT: Conserving cultural heritage requires modeling complex and often time-dependent physical phenomena that govern the degradation and transformation of materials. This contribution presents a digital framework centered on Physics-Informed Neural Networks (PINNs) to enhance cultural asset analysis and predictive maintenance. PINNs provide a powerful mechanism for embedding the governing equations of physical systems directly into the learning process, enabling models that remain consistent with conservation laws even when data are sparse or heterogeneous.
Within the proposed architecture, PINNs are employed to analyse several phenomena, leveraging sensor data and digital replicas of heritage components to refine simulations and update model parameters.
To further increase efficiency, the framework couples PINNs with Reduced Order Models (ROMs). This hybrid strategy allows PINNs to focus on enforcing physical consistency and parameter estimation, while ROMs accelerate computations across large parameter spaces.
The talk discusses challenges in interacting with digital models, in sampling strategies, and in model robustness when combining data-driven and physics-driven components.
This contribution is based on joint work with F. Colace (DIIN, Unisa), D. Conte (DIPMAT, Unisa), F. Pichi (SISSA), and G. Rozza (SISSA).
TITLE: A brief introduction to matrix hydrodynamics
ABSTRACT: The aim of this talk is to give a basic presentation of matrix hydrodynamics, the field pioneered by V. Zeitlin, in which 2D incompressible fluids are spatially discretized via a suitable quantization theory. Even though we do not apply our results to a neural network, we believe that the rich geometric structure of matrix hydrodynamics may prove very useful in the context of PINNs.
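For orientation (the standard form of the model, stated here from the literature rather than from the talk itself): Zeitlin's discretization replaces the vorticity formulation of the 2D Euler equations, \(\dot\omega = \{\psi, \omega\}\), \(\Delta\psi = \omega\), by an isospectral matrix flow,

```latex
\dot{W} = [P, W], \qquad \Delta_N P = W, \qquad W, P \in \mathfrak{su}(N),
```

where \(\Delta_N\) is a quantized Laplacian and the Poisson bracket has become a commutator. The flow preserves the spectrum of \(W\), mirroring the conservation of the vorticity integrals (Casimirs) in the continuum, and it is this structure-preserving geometry that may be of interest for PINNs.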