Claire Boyer (Sorbonne Université, Paris, France)
Abstract: Physics-informed neural networks (PINNs) combine the expressiveness of neural networks with the interpretability of physical modeling. Their good practical performance has been demonstrated both for solving partial differential equations and, more generally, for hybrid modeling, which combines an imperfect physical model with noisy observations. However, most of their theoretical properties remain to be established. We offer some food for thought and statistical insight into the proper use of PINNs.
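To make the hybrid-modeling setting concrete, here is a minimal PINN sketch (an illustration written for this page, not code from the talk): a small network is trained on a loss that adds a PDE-residual term, evaluated at random collocation points, to a data-fit term on noisy observations. The PDE, network size, and hyperparameters below are all illustrative assumptions.

```python
# Minimal PINN sketch (illustrative, not from the talk): fit u(x) solving
# u''(x) = -pi^2 sin(pi x) on [0, 1], combining a physics residual with a
# data-fit term on noisy observations, as in hybrid modeling.
import torch

torch.manual_seed(0)

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pde_residual(x):
    # Residual of u'' + pi^2 sin(pi x) = 0, via automatic differentiation.
    x = x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    return d2u + torch.pi**2 * torch.sin(torch.pi * x)

# Noisy observations of the exact solution u(x) = sin(pi x).
x_obs = torch.rand(20, 1)
y_obs = torch.sin(torch.pi * x_obs) + 0.05 * torch.randn(20, 1)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    x_col = torch.rand(128, 1)  # collocation points for the physics term
    loss = (pde_residual(x_col) ** 2).mean() + ((net(x_obs) - y_obs) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```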
Estelle Massart (UCLouvain, Belgium)
Abstract: We address global high-dimensional optimization problems in which the objective varies mostly along a low-dimensional subspace of the search space. Such problems arise, for example, in complex engineering and physical simulation/inverse problems. We propose a random-subspace algorithmic framework (referred to as X-REGO) that randomly projects, sequentially or simultaneously, the original high-dimensional problem into low-dimensional subproblems that can then be solved with any global, or even local, optimization solver. For Lipschitz-continuous objectives, we analyse its convergence using novel tools from probability theory and conic integral geometry; the analysis relies on an estimate of the probability that the randomly embedded subproblem shares (approximately) the same global optimum as the original problem. This success probability is then used to show almost sure convergence of X-REGO to an approximate global solution of the original problem, under weak assumptions on the problem (having a strictly feasible global solution) and on the solver (guaranteed to find an approximate global solution of the reduced problem with sufficiently high probability).
This is joint work with C. Cartis (University of Oxford) and A. Otemissov (Nazarbayev University).
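To illustrate the random-subspace idea, the following is a schematic Python sketch of a sequential X-REGO-style loop, based only on the description above and not on the authors' implementation: each iteration draws a random Gaussian embedding, solves the reduced problem with an off-the-shelf solver, and keeps the result if it improves on the incumbent. The accept-if-improved rule and all parameter values are assumptions made for the sake of the example.

```python
# Schematic random-subspace loop (my reading of the abstract, not X-REGO itself).
import numpy as np
from scipy.optimize import minimize

def x_rego_like(f, x0, d=2, n_embeddings=50, seed=0):
    rng = np.random.default_rng(seed)
    x, D = x0.copy(), x0.size
    for _ in range(n_embeddings):
        A = rng.standard_normal((D, d)) / np.sqrt(d)  # random Gaussian embedding
        sub = minimize(lambda y: f(x + A @ y), np.zeros(d))  # reduced problem
        if sub.fun < f(x):
            x = x + A @ sub.x  # move the anchor point to the improvement
    return x

# Toy objective on R^100 that varies mostly along a 2-dimensional subspace.
D = 100
rng = np.random.default_rng(1)
B = rng.standard_normal((D, 2))
f = lambda x: np.sum((B.T @ x - np.array([3.0, -1.0])) ** 2) + 1e-3 * np.sum(x**2)
print(f(x_rego_like(f, np.zeros(D))))  # much smaller than f(0) = 10
```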
Luca Ratti (Università di Genova, Italy)
Abstract: Sparsity promotion is a popular regularization technique for inverse problems, reflecting the prior knowledge that the exact solution is expected to have few non-vanishing components, e.g. in a suitable wavelet basis. In this talk, I will present a convolutional neural network designed for sparsity-promoting regularization of linear inverse problems. The key idea behind the proposed architecture is to unroll the Iterative Soft Thresholding Algorithm (ISTA) for sparsity promotion, introducing a learnable correction on the forward operator. By employing a multiresolution wavelet representation of the signals, we can represent the learned correction as a (suitably designed) convolutional layer and, by microlocal analysis, interpret it as a pseudodifferential operator, motivating the name of our novel architecture: PsiDONet. I will discuss the main theoretical results on the resulting algorithm, as well as some numerical examples from our main case study, limited-angle computed tomography.
Finally, I will describe some recent extensions of the project, both towards a more efficient parametrization of the architecture and towards a more general class of regularization functionals.
This is a joint project with T. A. Bubba (University of Bath), M. Lassas, S. Siltanen (University of Helsinki), and M. Galinier, M. Prato (Università di Modena).
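As a rough illustration of the unrolling idea, here is a simplified PyTorch sketch of an unrolled-ISTA network, based only on the description above and not on the actual PsiDONet architecture: each layer performs a soft-thresholded gradient step, with a small learnable convolution standing in for the paper's learned correction (which acts on the forward operator in a wavelet representation). Shapes and hyperparameters are illustrative assumptions.

```python
# Simplified unrolled-ISTA sketch (not the PsiDONet architecture).
import torch

class UnrolledISTA(torch.nn.Module):
    def __init__(self, A, n_layers=10, lam=0.1):
        super().__init__()
        self.A = A  # fixed (imperfect) forward operator, shape (m, n)
        self.step = float(1.0 / torch.linalg.matrix_norm(A, 2) ** 2)  # 1/L
        self.lam = lam
        # One learnable 1-D convolution per unrolled iteration.
        self.corr = torch.nn.ModuleList(
            [torch.nn.Conv1d(1, 1, kernel_size=5, padding=2) for _ in range(n_layers)]
        )

    def forward(self, y):
        x = torch.zeros(y.shape[0], self.A.shape[1])
        for conv in self.corr:
            grad = (x @ self.A.T - y) @ self.A  # gradient of 0.5 * ||Ax - y||^2
            z = x - self.step * grad
            z = z + conv(z.unsqueeze(1)).squeeze(1)  # learned correction term
            x = torch.nn.functional.softshrink(z, self.lam * self.step)  # sparsity
        return x

A = torch.randn(30, 64) / 8.0                # stand-in forward operator
x_hat = UnrolledISTA(A)(torch.randn(4, 30))  # reconstructions from 4 measurements
```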