Monday January 19th 13:30-18:00
13:30-14:00 - Opening
14:00-14:30 - Elisa Francini (Università di Firenze, IT), A nonlinear inverse problem with applications in cardiac electrophysiology
Abstract: A diffusion equation, coupled with an ordinary differential equation, regulates the behaviour of the transmembrane potential in heart tissue. We investigate the inverse problem of identifying perfectly insulating regions within the cardiac tissue that represent ischemic areas. We prove that the geometry and location of these insulating regions can be uniquely determined using only partial boundary measurements of the transmembrane potential.
14:30-15:00 - Giuseppe Floridia (Università di Napoli Federico II, IT), Inverse Problems for First-Order Hyperbolic Equations via Carleman Estimates
Abstract: In this talk, we discuss recent results on inverse problems for first-order hyperbolic equations obtained via Carleman estimates. In particular, we establish a new Carleman estimate that serves as the main analytical tool to address two types of inverse problems: the determination of an initial condition and the identification of a spatial factor in a source term. Using this framework, we derive Lipschitz stability estimates for both problems.
15:00-15:30 - Andrea Posilicano (Università dell'Insubria, IT), Inverse wave scattering in the time domain
Abstract:
15:30-16:00 - Alessandro Felisi (University of Bonn, DE), Stability transition for the inverse transport problem in the diffusive regime
Abstract: The inverse transport problem consists in retrieving the optical parameters of a medium, namely the absorption and the scattering coefficients, from measurements performed at the domain's boundary. In this talk, I will show how the relative magnitude of absorption and scattering influences the stability properties of the inverse problem. In particular, I will explain how the severe ill-posedness encountered in the highly-scattering, low-absorption regime is strictly connected to the diffusion approximation in radiative transfer. This is joint work with E. Demattè, Prof. A. Rüland and Prof. J.J.L. Velázquez.
16:00-16:30 - Coffee Break
16:30-17:00 - Silvia Gazzola (Università di Pisa, IT), Efficient gradient-based methods for bilevel learning via recycling Krylov subspaces
Abstract: Variational regularization methods always require a number of hyperparameters, i.e., parameters that must be specified in advance, such as regularization parameters. A data-driven approach to determining appropriate hyperparameter values is a nested optimization framework known as bilevel learning. Even when it is possible to apply a gradient-based solver to the bilevel optimization problem, constructing the gradients, known as hypergradients, is computationally challenging: each one requires the solution of both a minimization problem and a linear system. Since the latter do not change much between iterations, we explore the application of recycling Krylov subspace methods, wherein information from one linear system solve is reused to solve the next. Specifically, we propose a novel recycling strategy based on a new concept, Ritz generalized singular vectors, which acknowledges the bilevel setting. Additionally, while existing iterative methods primarily terminate according to the residual norm, Ritz generalized singular vectors allow us to define a new stopping criterion that directly approximates the error of the associated hypergradient. The proposed approach is validated through extensive numerical testing in the context of inverse problems in imaging.
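As a rough illustration of the hypergradient bottleneck (not the speaker's method: the Ritz-vector recycling is replaced here by plain warm starting of the conjugate gradient solver), consider a ridge-type lower-level problem:

```python
# Hypothetical sketch: hypergradient of a ridge-type bilevel problem, with the
# two CG solves warm-started from the previous iteration (a simpler stand-in
# for the recycling strategy proposed in the talk).
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 50))
b = rng.standard_normal(100)
x_true = rng.standard_normal(50)   # ground truth used by the upper-level loss

def hypergradient(lam, x0, w0):
    """Lower level: x(lam) = argmin ||Ax - b||^2 + lam*||x||^2.
    Upper level:  L(lam) = 0.5*||x(lam) - x_true||^2.
    Returns dL/dlam plus the two CG solutions (reused as next warm starts)."""
    H = LinearOperator((50, 50), matvec=lambda v: A.T @ (A @ v) + lam * v)
    x, _ = cg(H, A.T @ b, x0=x0)        # state solve, warm started
    w, _ = cg(H, x - x_true, x0=w0)     # adjoint-type solve, warm started
    return -x @ w, x, w                 # implicit-differentiation formula

lam, x0, w0 = 1.0, None, None
for _ in range(20):                     # projected gradient descent on lam
    g, x0, w0 = hypergradient(lam, x0, w0)
    lam = max(lam - 0.1 * g, 1e-8)
```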
17:00-17:30 - Alessandro Benfenati (Università Statale di Milano, IT), Splitting PnP Approaches for Poisson Image Restoration
Abstract: PnPSplit+ is introduced as a Plug-and-Play method tailored for Poisson image restoration problems. By extending the PIDSplit+ algorithm within an ADMM framework, the approach provides a closed-form solution for the deblurring step, thus avoiding costly iterative solvers typically required for the Kullback–Leibler fidelity. Convergence is guaranteed through the use of a firmly nonexpansive denoiser, ensuring theoretical soundness. Extensive experiments on challenging restoration tasks demonstrate that PnPSplit+ achieves state-of-the-art performance, even under severe blur and high noise levels.
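The closed-form Kullback–Leibler proximal step mentioned in the abstract can be sketched for the simpler case of an identity forward operator; the additional splitting that PnPSplit+ uses to handle the blur operator is omitted here:

```python
# Minimal PnP-ADMM sketch for *denoising* under Poisson noise; illustrative
# only, not the PnPSplit+ implementation of the talk.
import numpy as np

def prox_kl(v, y, gamma):
    """Closed-form prox of gamma * sum(z - y*log z) (Poisson fidelity)."""
    return 0.5 * ((v - gamma) + np.sqrt((v - gamma) ** 2 + 4.0 * gamma * y))

def pnp_admm_poisson(y, denoiser, gamma=1.0, iters=50):
    x = y.astype(float).copy()
    z, u = x.copy(), np.zeros_like(x)   # split variable and scaled dual
    for _ in range(iters):
        z = prox_kl(x + u, y, gamma)    # data-fidelity step (closed form)
        x = denoiser(z - u)             # plug-and-play denoising step
        u = u + x - z                   # dual update
    return x
```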
17:30-18:00 - Paolo Massa (University of Applied Sciences and Arts Northwestern Switzerland, CH), Risk-based approaches for Poisson image denoising with neural networks
Abstract: In this talk, we discuss the problem of removing Poisson noise from images using a Neural Network (NN). Inspired by Deep Image Prior, we consider an unsupervised approach, and we train a NN on a single noisy image to produce a denoised version of it. We propose to minimize the Kullback–Leibler divergence between the noisy image and the NN prediction. To prevent noise overfitting, we introduce a stopping rule for the NN training based on the Poisson Asymptotically Unbiased Kullback–Leibler (PAUKL) risk estimator. We prove that Multi-Layer Perceptrons (MLPs) with sigmoid activation functions satisfy the assumptions under which PAUKL is asymptotically unbiased. Moreover, we demonstrate that training a NN by directly minimizing the PAUKL estimator results in superior denoising performance. We conduct numerical experiments in both simulated and real-world scenarios and compare the results with those obtained from related risk-based approaches.
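A minimal sketch of the unsupervised training loop, assuming PyTorch; `paukl_risk` is a hypothetical placeholder for the PAUKL estimator of the talk:

```python
# Deep-Image-Prior-style training on a single noisy image, with a risk-based
# stopping rule; `paukl_risk` is a placeholder, not an actual implementation.
import torch

def kl_poisson(y, x):
    """Poisson negative log-likelihood, up to terms constant in x."""
    return (x - y * torch.log(x + 1e-12)).sum()

def train_denoiser(net, y, paukl_risk, steps=5000, lr=1e-3):
    z = torch.randn_like(y)                 # fixed random input (DIP-style)
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    best, best_out = float("inf"), None
    for _ in range(steps):
        opt.zero_grad()
        x = net(z)
        loss = kl_poisson(y, x)
        loss.backward()
        opt.step()
        r = paukl_risk(y, x.detach())       # unbiased risk estimate
        if r < best:                        # track the risk, not the
            best, best_out = r, x.detach()  # training loss, to avoid
    return best_out                         # overfitting the noise
```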
Tuesday January 20th 09:00-18:00
09:00-09:30 - Andrea Sebastiani (Università di Modena e Reggio Emilia, IT), Scalable Plug-and-Play Imaging through Block Proximal Heavy Ball
Abstract: In this talk we introduce a block proximal heavy ball method that makes Plug-and-Play (PnP) imaging scalable to resource-constrained settings. In particular, operating on image patches instead of full images dramatically reduces memory requirements while preserving theoretical convergence properties. Experiments on deblurring and super-resolution validate our approach, showing that scalability can be achieved without sacrificing reconstruction quality (and occasionally even enhancing it).
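A schematic form of one such block update, with generic placeholders for the patch decomposition, data-fit gradient, and denoiser (a sketch, not the speaker's implementation):

```python
# Block proximal heavy-ball iteration on image patches; the denoiser plays
# the role of the proximal operator (PnP).
import numpy as np

def block_prox_heavy_ball(x0, blocks, grad_f, denoiser,
                          gamma=0.1, beta=0.5, iters=100):
    x, x_prev = x0.copy(), x0.copy()
    for _ in range(iters):
        for idx in blocks:                  # idx: indices/slice of one patch
            v = (x[idx] - gamma * grad_f(x, idx)   # gradient step on block
                 + beta * (x[idx] - x_prev[idx]))  # inertial heavy-ball term
            x_prev[idx] = x[idx]
            x[idx] = denoiser(v)            # PnP proximal step on the patch
    return x
```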
09:30-10:00 - Emma Perracchione (Politecnico di Torino, IT), Greedy methods in the context of inverse problems
Abstract: Given a set of (possibly) indirect measurements sampled at multivariate scattered points, greedy schemes consist in setting all but a few data aside as a test set. The data left out are used as a training set to construct an initial rough model, which is then refined by incrementally adding the most relevant sample, picked from the test set according to some iterative rule. This rule may be based on the residuals, so that the greedy scheme adds at each iteration the point at which the difference between the observed and approximated quantity is maximum, making the scheme entirely task dependent; alternatively, it may rely on a priori error indicators independent of the residual. The first class of schemes can be trivially adapted to the inverse framework by applying the forward and backward operators to the discrete data, while to provide reliable bounds for error-based schemes we restrict ourselves to specific imaging procedures, namely interpolation and extrapolation algorithms. We will show that it is rather the basis of the projection space, in combination with the greedy method, that allows estimates and error bounds on the reconstruction. We numerically validate our theoretical claims on an application to solar hard X-ray imaging: telescopes such as the ESA Spectrometer/Telescope for Imaging X-rays (STIX) on board Solar Orbiter provide observations (called visibilities) consisting of sampled Fourier components of the incoming photon flux released by solar flares.
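A minimal sketch of the residual-based variant, with `fit` and `predict` as generic placeholders:

```python
# Residual-based greedy loop: at each step, add the test point with the
# largest data misfit and refit the model.
import numpy as np

def greedy_residual(points, values, fit, predict, n_init=5, n_steps=20):
    idx = list(range(n_init))                   # initial rough model
    rest = list(range(n_init, len(points)))
    model = fit(points[idx], values[idx])
    for _ in range(n_steps):
        resid = np.abs(values[rest] - predict(model, points[rest]))
        k = rest.pop(int(np.argmax(resid)))     # most informative sample
        idx.append(k)
        model = fit(points[idx], values[idx])
    return model, idx
```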
10:00-10:30 - Andrea Frosini (Università di Firenze, IT), Reconstruction of convex polyominoes from projections
Abstract: One of the most interesting and challenging problems in Discrete Tomography concerns the faithful reconstruction of an unknown finite discrete set from its horizontal and vertical projections. The computational complexity of this problem has been settled in the case of horizontally and vertically convex polyominoes, by coding the possible solutions through a 2-SAT formula. On the other hand, the problem is still open in the case of (fully) convex polyominoes; indeed, the previous polynomial-time reconstruction strategy does not naturally generalize to them. In particular, it has been observed that the convexity constraint on polyominoes involves, in general, a k-SAT formula, which has so far prevented the polynomiality of the entire process. Our studies focus on the clauses of such a formula: we show that they can be reduced to 2-SAT and 3-SAT clauses only, and that only a subset of the variables involved in the reconstruction may appear in the 3-SAT clauses, thus identifying some situations that lead to a polynomial-time reconstruction.
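For concreteness, the projections of a binary image are simply its row and column sums:

```python
# Tiny worked example: an HV-convex polyomino as a 0/1 matrix and its
# horizontal/vertical projections.
import numpy as np

P = np.array([[0, 1, 1, 0],
              [1, 1, 1, 1],
              [0, 1, 1, 0]])
H = P.sum(axis=1)   # horizontal projection: [2, 4, 2]
V = P.sum(axis=0)   # vertical projection:   [1, 3, 3, 1]
```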
10:30-11:00 - Coffee Break
11:00-11:30 - Sombuddha Bhattacharyya (Indian Institute of Science Education and Research, IN), Light Ray Transform of Tensor Fields
Abstract:
11:30-12:00 - Tapio Helin (LUT University, FI), Recent advances in passive gamma emission tomography
Abstract: Passive gamma emission tomography (PGET) is an imaging technique used to map the distribution of radioactive material inside a water tank by measuring the gamma rays it naturally emits, without adding any external radiation source. Detectors collect these emissions from many angles, and reconstruction algorithms convert the measurements into a 3D picture of where the radioactivity is located. In this work we discuss recent work on developing data-driven solvers for the reconstruction task.
12:00-12:30 - Wadim Gerner (Università di Genova, IT), Green hydrogen and inverse problems
Abstract: I present recent results regarding inverse problems in electrolyser cells. More specifically, I will present results on the type of measurements that need to be performed in order to reconstruct the dependence of the electric potential in an electrolyser cell on the temperature and ion concentrations.
12:30-14:00 - Lunch Break & Poster Session
14:00-14:30 - Andreas Habring (Institute of Visual Computing, AT), Inertial Langevin Sampling
Abstract: In this talk we present a new algorithm for sampling from Gibbs distributions of the form $p(x) \propto e^{-U(x)}$ for potentials $U:\mathbb{R}^d\rightarrow \mathbb{R}$. Inspired by Polyak's heavy ball method, we incorporate inertia/momentum into the famous unadjusted Langevin algorithm leading to the inertial Langevin algorithm. We prove ergodicity and convergence to the target distribution for strongly convex and $L$-smooth potentials and empirically confirm the efficiency and acceleration of the method even beyond these cases.
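For intuition only, one naive way to add heavy-ball momentum to the unadjusted Langevin algorithm is sketched below; the inertial Langevin algorithm of the talk uses a specific discretization with provable guarantees, which this sketch does not reproduce:

```python
# Naive momentum variant of the unadjusted Langevin algorithm (ULA) for
# sampling from p(x) ~ exp(-U(x)); shown for intuition, not as the method
# analyzed in the talk (the plain momentum term below biases the sampler).
import numpy as np

def inertial_ula(grad_U, x0, gamma=0.01, beta=0.5, n=10_000):
    rng = np.random.default_rng(0)
    x, x_prev, samples = x0.copy(), x0.copy(), []
    for _ in range(n):
        noise = rng.standard_normal(x.shape)
        x_new = (x - gamma * grad_U(x)           # gradient (drift) step
                 + beta * (x - x_prev)           # inertial / momentum term
                 + np.sqrt(2 * gamma) * noise)   # diffusion step
        x_prev, x = x, x_new
        samples.append(x)
    return np.array(samples)

# Example: standard Gaussian target, U(x) = ||x||^2 / 2, so grad U(x) = x.
samples = inertial_ula(lambda x: x, np.zeros(2))
```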
14:30-15:00 - Alessio Marta (Università Statale di Milano, IT), Estimating Dataset Dimension via Singular Metrics under the Manifold Hypothesis
Abstract: High-dimensional datasets often exhibit low-dimensional geometric structures, as suggested by the manifold hypothesis, which implies that data lie on a smooth manifold embedded in a higher-dimensional ambient space. While this insight underpins many advances in machine learning and inverse problems, fully leveraging it requires dealing with three key tasks: estimating the intrinsic dimension (ID) of the manifold, constructing appropriate local coordinates, and learning mappings between ambient and manifold spaces. In this work, we propose a framework that addresses all these challenges using a Mixture of Variational Autoencoders (VAEs) and tools from singular Riemannian geometry. We specifically focus on estimating the ID of datasets by analyzing the numerical rank of the VAE decoder pullback metric. The estimated ID then guides the construction of an atlas of local charts using a mixture of invertible VAEs, enabling accurate manifold parameterization and efficient inference. This approach can be used to enhance the solutions of ill-posed inverse problems, particularly in biomedical imaging, by enforcing that reconstructions lie on the learned manifold.
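A minimal sketch of the ID estimate via the numerical rank of the decoder Jacobian (the pullback metric $J^\top J$ has the same rank, since its eigenvalues are the squared singular values of $J$):

```python
# Intrinsic-dimension estimate from the numerical rank of the decoder
# Jacobian at sampled latent points; `decoder_jacobian` is a placeholder.
import numpy as np

def intrinsic_dimension(decoder_jacobian, latents, tol=1e-3):
    """decoder_jacobian(z) -> Jacobian of the decoder at z, shape (D, d)."""
    ranks = []
    for z in latents:
        s = np.linalg.svd(decoder_jacobian(z), compute_uv=False)
        ranks.append(int((s / s[0] > tol).sum()))   # relative-threshold rank
    return int(np.median(ranks))                    # aggregate over samples
```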
15:00-15:30 - Abhishake Rastogi (LUT University, FI), Gradient-Based Nonlinear Inverse Learning
Abstract: We investigate statistical inverse learning for nonlinear inverse problems under random design. In particular, we analyze gradient descent (GD) and stochastic gradient descent (SGD) with mini-batching, both employing constant step sizes. Our theoretical results establish convergence rates for these algorithms under standard a priori smoothness assumptions on the target function, formulated through the integral operator of the tangent kernel and bounds on the effective dimension. Furthermore, we determine stopping rules that guarantee minimax-optimal rates within the classical reproducing kernel Hilbert space (RKHS) framework. The findings highlight the effectiveness of GD and SGD in achieving optimal statistical accuracy for nonlinear inverse problems in random design settings.
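A generic sketch of the algorithmic setup (constant step size, mini-batching, and an iteration cap acting as the stopping rule; the talk derives the theoretically optimal cap, which is not computed here):

```python
# Mini-batch SGD with constant step size and early stopping via an
# iteration budget `stop_at`.
import numpy as np

def sgd(grad_sample, theta0, data, step=0.1, batch=16, stop_at=500):
    rng = np.random.default_rng(0)
    theta = theta0.copy()
    for _ in range(stop_at):                # early stopping: t <= t*
        idx = rng.choice(len(data), size=batch, replace=False)
        g = np.mean([grad_sample(theta, data[i]) for i in idx], axis=0)
        theta = theta - step * g            # constant step size
    return theta
```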
15:30-16:00 - Coffee Break
16:00-16:30 - Botond Szabo (Università Bocconi, IT), Linear methods for non-linear inverse problems
Abstract: We consider the recovery of an unknown function $f$ from a noisy observation of the solution $u_f$ to a partial differential equation that can be written in the form $\mathcal{L} u_f=c(f,u_f)$, for a differential operator $\mathcal{L}$ that is rich enough to recover $f$ from $\mathcal{L} u_f$. Examples include the time-independent Schrödinger equation $\Delta u_f = 2u_f f$, the heat equation with absorption term $(\partial_t -\Delta_x/2) u_f=fu_f$, and the Darcy problem $\nabla\cdot (f \nabla u_f) = h$. We transform this problem into the linear inverse problem of recovering $\mathcal{L} u_f$ under the Dirichlet boundary condition, and show that Bayesian methods with priors placed either on $u_f$ or $\mathcal{L} u_f$ for this problem yield optimal recovery rates not only for $u_f$, but also for $f$. We also derive frequentist coverage guarantees for the corresponding Bayesian credible sets. Adaptive priors are shown to yield adaptive contraction rates for $f$, thus eliminating the need to know the smoothness of this function. The results are illustrated by numerical experiments on synthetic data sets.
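For the Schrödinger example, the two-step reduction reads as follows (a sketch; regularity and non-vanishing conditions on $u_f$ are glossed over):

```latex
% Two-step recovery for the Schrödinger example $\Delta u_f = 2 u_f f$:
% Step 1 (linear): estimate $w := \mathcal{L}u_f = \Delta u_f$ from the
%        noisy observation of $u_f$, under the Dirichlet boundary condition.
% Step 2 (pointwise algebra):
\[
  \Delta u_f = 2\,u_f\, f
  \quad\Longrightarrow\quad
  f = \frac{\Delta u_f}{2\,u_f}
  \qquad \text{wherever } u_f \neq 0 .
\]
```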
16:30-17:00 - Gianpaolo Piscitelli (Università degli studi Napoli Parthenope, IT), The p−Laplace “Signature” for Quasilinear Inverse Problems
Abstract: This presentation focuses on an imaging problem encountered in the framework of Electrical Resistance Tomography involving two different materials, one or both of which are nonlinear. Tomography with nonlinear materials is in the early stages of development, although breakthroughs are expected in the not-too-distant future. We consider nonlinear constitutive relationships which, at a given point in space, present a behaviour for large arguments described by monomials of order p and q. The original contribution of this work is to show that the nonlinear problem can be approximated by a weighted p−Laplace problem. From the perspective of tomography, this is a significant result because it highlights the central role played by the p−Laplacian in inverse problems with nonlinear materials. Moreover, when p = 2, it provides a powerful bridge to bring all the imaging methods and algorithms developed for linear materials into the arena of problems with nonlinear materials. The main results are that, for “large” or “small” Dirichlet data in the presence of two materials of different order, (i) one material can be replaced by either a perfect electric conductor or a perfect electric insulator and (ii) the other material can be replaced by a material giving rise to a weighted p−Laplace problem.
17:00-17:30 - Matteo Fornoni (Università Statale di Milano, IT) , Optimal control of Cahn-Hilliard image inpainting models
Abstract: We consider an inpainting model proposed by A. Bertozzi et al., which is based on a Cahn-Hilliard-type equation, describing the evolution of an order parameter that approximates an image occupying a bounded two-dimensional domain. The given image is assumed to be damaged in a fixed subdomain, and the equation is characterised by a linear reaction term multiplied by the so-called fidelity coefficient, which is a strictly positive bounded function defined in the undamaged region. The idea is that, given an initial image, the order parameter evolves towards the given image and this process properly diffuses through the boundary of the damaged region, restoring the damaged image, provided that the fidelity coefficient is large enough. Here, we formulate an optimal control problem based on this fact, namely, our cost functional accounts for the magnitude of the fidelity coefficient. In this contribution, we first analyse the control-to-state operator and prove the existence of at least one optimal control, establishing the validity of first-order optimality conditions. Then, under suitable assumptions, we demonstrate second-order optimality conditions. This is a joint work with Elena Beretta, Cecilia Cavaterra and Maurizio Grasselli.
Wednesday January 21st 09:00-12:30
09:00-09:30 - Mahsa Yousefi (Università di Firenze, IT), Estimating Coefficient Functions in Inverse PDEs via PINNs
Abstract: We aim to estimate an unknown coefficient function in an inverse PDE problem using physics-informed neural networks (PINNs) enhanced by Fast Fourier transform (FFT). We present a two-phase algorithm and show some numerical results.
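A one-dimensional sketch of a generic PINN loss for coefficient recovery in $-(a(x)u'(x))' = g(x)$, assuming PyTorch; the two-phase, FFT-enhanced algorithm of the talk is not reproduced here:

```python
# Both the state u and the unknown coefficient a are networks, trained on a
# data misfit plus the PDE residual at collocation points.
import torch

def pinn_loss(u_net, a_net, x_data, u_data, x_coll, g, w=1.0):
    """1D sketch: recover a(x) in -(a(x) u'(x))' = g(x)."""
    mse = torch.mean((u_net(x_data) - u_data) ** 2)       # data misfit
    x = x_coll.clone().requires_grad_(True)
    u = u_net(x)
    ux = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    flux = a_net(x) * ux                                  # a(x) * u'(x)
    flux_x = torch.autograd.grad(flux.sum(), x, create_graph=True)[0]
    return mse + w * torch.mean((-flux_x - g(x)) ** 2)    # PDE residual
```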
09:30-10:00 - Edoardo Centofanti (Università di Pavia, IT), Scientific Machine Learning Approaches to Cardiac Inverse Problems for Reconstructing Stimuli and Ischemia from Pseudo-ECG
Abstract: Inverse problems are essential in computational cardiology to infer hidden pathological or functional features from non-invasive data. We tackle the inverse reconstruction of ischemic regions and localization of external stimuli from pseudo-ECG signals, using the cardiac monodomain model as the physiological foundation. Latent Dynamics Networks (LD-Nets) serve as fast neural surrogates for the forward part of our architecture, mapping between ischemic or stimulus patterns and pseudo-ECGs, enabling efficient simulations in both 2D and 3D (ellipsoidal ventricular) geometries. This work demonstrates how combining deep learning with mechanistic models can accelerate simulations and enhance inverse reconstructions in clinically relevant contexts.
10:00-10:30 - Davide Bianchi (Sun Yat-sen University, CN), LIP-CAR: contrast agent reduction by a deep learned inverse problem
Abstract: The adoption of contrast agents in medical imaging protocols is crucial for accurate and timely diagnosis. While highly effective and characterized by an excellent safety profile, the use of contrast agents has its limitations, including rare risk of allergic reactions, potential environmental impact and economic burdens on patients and healthcare systems. In this work, we address the contrast agent reduction (CAR) problem, which involves reducing the administered dosage of contrast agent while preserving the visual enhancement. The current literature on the CAR task is based on deep learning techniques within a purely image-processing framework. These techniques digitally simulate high-dose images from images acquired with a low dose of contrast agent. We investigate the feasibility of a "learned inverse problem" (LIP) approach, as opposed to the end-to-end paradigm of the state-of-the-art literature. Specifically, we learn the image-to-image operator that maps high-dose images to their corresponding low-dose counterparts, and we frame the CAR task as an inverse problem. We then solve this problem through a regularized optimization reformulation. Regularization methods are well-established mathematical techniques that offer robustness and explainability. Our approach combines these rigorous techniques with cutting-edge deep learning tools. Numerical experiments performed on pre-clinical medical images confirm the effectiveness of this strategy, showing improved stability and accuracy in the simulated high-dose images.
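A schematic of the LIP reconstruction step, assuming PyTorch and a pre-trained operator `F_theta`; the regularizer below is a simple placeholder, not the one used in the work:

```python
# Having learned F_theta (high-dose -> low-dose), the high-dose image is
# recovered variationally; autograd differentiates through the network.
import torch

def lip_reconstruct(F_theta, y_low, lam=0.01, steps=200, lr=1e-2):
    x = y_low.clone().requires_grad_(True)          # init with low-dose image
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        fid = torch.sum((F_theta(x) - y_low) ** 2)  # learned-operator fidelity
        reg = torch.sum(torch.abs(x[..., 1:] - x[..., :-1]))  # TV-like prior
        (fid + lam * reg).backward()
        opt.step()
    return x.detach()
```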
10:30-11:00 - Coffee Break
11:00-11:30 - Manuel Cañizares (RICAM, AT), Projection-based regularization for galaxy classification
Abstract: The orbits of stars in a galaxy can reveal information about its past. In particular, orbital structures give us information about previous merger processes between galaxies. Finding these structures from astronomical observations is an ill-posed problem, which amplifies measurement noise. However, orbital structures can be observed for simulated galaxies, in which individual stars can be tracked. We propose a projection-based method to use these simulated data as a learning set that allows us to regularize the problem.
11:30-12:00 - Simone Roncallo (Università di Pavia, IT), Quantum technologies for downsampling and pattern recognition
Abstract: Visual information can be manipulated in terms of images, usually captured and then processed through a sequence of computational operations. Alternatively, optical systems can perform such operations directly, reducing computational overhead at the cost of stricter design requirements. We discuss this workflow in the context of quantum technologies. First, we introduce a quantum algorithm that uses the quantum Fourier transform to discard the high spatial-frequency qubits of an image, downsampling it to a lower resolution. Our method allows us to capture, compress, and communicate visual information even with limited resources [1,2]. Then, we present a quantum optical pattern recognition method for binary classification tasks (in collaboration with MIT). Our method classifies patterns without reconstructing their images, encoding the spatial information of the object in the spectrum of a single photon, providing a superexponential speedup over classical methods [3,4].
References
[1] Simone Roncallo, Lorenzo Maccone and Chiara Macchiavello, Quantum JPEG, AVS Quantum Sci. 5, 043803 (2023)
[2] Emanuele Tumbiolo, Simone Roncallo, Chiara Macchiavello and Lorenzo Maccone, Quantum frequency resampling, npj Quantum Inf. 11, 123 (2025)
[3] Simone Roncallo, Angela Rosy Morgillo, Chiara Macchiavello, Lorenzo Maccone and Seth Lloyd, Quantum optical classifier with superexponential speedup, Commun. Phys. 8, 147 (2025)
[4] Simone Roncallo, Angela Rosy Morgillo, Seth Lloyd, Chiara Macchiavello and Lorenzo Maccone, Quantum optical shallow networks, arXiv:2507.21036
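A classical analogue of the Fourier-domain downsampling of [1] (the quantum algorithm instead discards the high-frequency qubits after a quantum Fourier transform):

```python
# Downsample a square image by keeping only its low spatial frequencies.
import numpy as np

def fourier_downsample(img, k):
    """Downsample a square image to k x k by cropping its centered spectrum."""
    F = np.fft.fftshift(np.fft.fft2(img))
    c = img.shape[0] // 2
    crop = F[c - k // 2:c + (k + 1) // 2, c - k // 2:c + (k + 1) // 2]
    out = np.fft.ifft2(np.fft.ifftshift(crop)).real
    return out * (k ** 2 / img.shape[0] ** 2)   # amplitude renormalization
```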
12:00-12:30 - Luca Ratti (Università di Bologna, IT), Multi-frequency Electrical Impedance Tomography: variational and learned reconstruction methods
Abstract: Electrical Impedance Tomography (EIT) is a non-invasive imaging technique that uses boundary currents injected through surface electrodes to measure the electrical properties of biological tissues. Multi-frequency EIT (mfEIT) is an advanced extension of EIT that accounts for the frequency dependence of the conductivity coefficient, enabling more accurate modeling and better discrimination between tissues. The resulting non-linear and ill-posed inverse problem of mfEIT consists of identifying a space- and frequency-varying conductivity from suitable boundary measurements. Assuming the imaged object consists of (possibly overlapping) tissues with known conductivity spectra, the problem consists in reconstructing the fractional concentration of each tissue in each location (fraction model). In this talk, I will introduce two methods for fraction reconstruction in mfEIT at the interface of classical regularization and supervised learning techniques. The first one consists of a constrained non-smooth optimization problem, solved by an algorithm denoted as Fractional Proximal Regularized Gauss-Newton (FR-PRGN). FR-PRGN combines a second-order scheme to update the tissue fraction with a proximal step based on Entropic Mirror Descent, designed to enforce non-negativity and sum-to-one constraints on the fractions, which naturally leads to the application of a soft-max operator. The second one, instead, is based on the unfolding paradigm and leverages the expressive power of graph neural networks in place of proximal operators within FR-PRGN. The resulting network is trained on a supervised dataset of synthetic data and tested in challenging scenarios with multiple tissues, also allowing for possible overlaps and smooth transitions between different materials, making our algorithms amenable to novel applications in tissue engineering.
This work is a collaboration with D. Lazzaro, S. Morigi, G.S. Alberti, and M. Santacesaria.
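The entropic-mirror-descent step mentioned in the abstract reduces to a softmax update; a minimal sketch:

```python
# One entropic-mirror-descent step on the probability simplex: a gradient
# step in the log domain followed by a (stabilized) softmax, i.e.,
# c_i <- c_i * exp(-tau * g_i) / normalization.
import numpy as np

def emd_step(c, grad, tau=0.1):
    """c: nonnegative fractions summing to one; grad: gradient of the data fit."""
    logits = np.log(c + 1e-12) - tau * grad
    e = np.exp(logits - logits.max())
    return e / e.sum()
```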