On the Convergence of Data-Assisted Regularization Method for Solving Nonlinear Inverse Problems
Organized by: Math4DL, University of Bath, Bath, United Kingdom.
Abstract:
On the Convergence of Iteratively Regularized Stochastic Gradient Descent for Solving Nonlinear Inverse Problems
Organized by: Annual Inverse Meet, IIT Gandhinagar.
Abstract: In this talk, we will discuss a novel variant of the stochastic gradient descent (SGD) method, termed iteratively regularized stochastic gradient descent (IRSGD), designed to address nonlinear ill-posed problems in Hilbert spaces. Under standard assumptions, we establish that the mean square iteration error of the proposed method converges to zero in the absence of noise in the data.
When dealing with noisy data, we introduce a Heuristic Parameter Choice Rule (HPCR) inspired by the approach of Hanke and Raus. This rule facilitates the selection of the regularization parameter without requiring prior knowledge of the noise level. We demonstrate that the IRSGD method, combined with HPCR, terminates in finitely many steps in the presence of noisy data while retaining its regularization properties.
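The abstract does not state the precise form of HPCR, but the classical Hanke–Raus idea it draws on can be sketched in a few lines: for an iterative method, choose the index minimizing a functional of the form psi(k) = sqrt(k) * ||F(x_k) - y^delta||, which needs no knowledge of the noise level. The snippet below is a minimal illustration of that selection step with a hypothetical residual history; the actual rule analyzed in the talk may differ.

```python
import numpy as np

def hanke_raus_index(residual_norms):
    """Pick the iteration index minimizing a Hanke-Raus-type functional
    psi(k) = sqrt(k) * ||F(x_k) - y^delta|| (no noise level required)."""
    psi = [np.sqrt(k) * r for k, r in enumerate(residual_norms, start=1)]
    return int(np.argmin(psi)) + 1  # return a 1-based iteration index

# Hypothetical residual history: decays like 1/k, then stagnates at a noise floor
res = [1.0 / k + 0.05 for k in range(1, 101)]
k_star = hanke_raus_index(res)
print(k_star)
```

For this synthetic history the functional 1/sqrt(k) + 0.05*sqrt(k) balances the decaying residual against the growing sqrt(k) factor, so the rule stops once the residual stagnates near the noise floor.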
If time permits, we also derive convergence rates for the IRSGD method under well-established source conditions and related assumptions. We relax the conventional assumption of polynomially decaying step sizes, which has been a key condition in previous analyses of SGD methods.
Finally, we showcase the practical efficacy and robustness of the proposed method by performing numerical experiments on first-kind linear integral equations and mathematical models based on Schlieren tomography.
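As a rough illustration of the scheme described above (not the authors' implementation), the sketch below runs an iteratively regularized SGD step of the commonly assumed form x_{k+1} = x_k - eta_k [F_i'(x_k)^*(F_i(x_k) - y_i) + alpha_k (x_k - x_0)] on a toy smooth nonlinear system with noise-free data. The forward maps, the constant step size eta (echoing the relaxed step-size condition mentioned above), and the decaying parameters alpha_k are all illustrative choices, not those from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 5, 50                                   # unknowns, number of equations
A = rng.standard_normal((N, n)) / np.sqrt(n)
x_true = rng.standard_normal(n)

# Toy nonlinear forward maps F_i(x) = s + 0.1 s^3 with s = a_i . x
def F(i, x):
    s = A[i] @ x
    return s + 0.1 * s**3

def grad_F(i, x):
    # adjoint of the derivative of F_i, applied to a scalar residual
    s = A[i] @ x
    return (1.0 + 0.3 * s**2) * A[i]

y = np.array([F(i, x_true) for i in range(N)])  # exact (noise-free) data

# IRSGD step: x <- x - eta [ F_i'(x)^*(F_i(x) - y_i) + alpha_k (x - x0) ]
x0 = np.zeros(n)
x = x0.copy()
eta = 0.02                                     # constant step size, no polynomial decay
for k in range(10000):
    i = rng.integers(N)                        # sample one equation uniformly at random
    alpha_k = 1.0 / (k + 1)**1.1               # decaying regularization (illustrative)
    r = F(i, x) - y[i]
    x = x - eta * (grad_F(i, x) * r + alpha_k * (x - x0))

print(np.linalg.norm(x - x_true))
```

With exact data the system is consistent, so the stochastic residuals vanish at the solution and the iteration error shrinks steadily, mirroring the noise-free mean square convergence stated in the abstract.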
Stochastic Data-Driven Bouligand Landweber Method for Non-smooth Inverse Problems
Organized by: Department of Mathematics, IIT Roorkee.
Abstract: In this talk, we present and analyze a novel variant of the stochastic gradient descent method, referred to as the stochastic data-driven Bouligand–Landweber iteration, tailored to systems of non-smooth ill-posed inverse problems. Our method incorporates training data through a bounded linear operator that guides the iterative procedure. At each iteration, the method randomly chooses one equation from the nonlinear system, together with a data-driven term. For exact data, we establish that the mean square iteration error converges to zero. For noisy data, we combine our approach with a predefined stopping criterion, which we refer to as an a priori stopping rule. We provide a comprehensive theoretical foundation, establishing convergence and stability of this scheme in infinite-dimensional Hilbert spaces.
These theoretical results are further supported by a numerical experiment on a system of linear ill-posed problems and by an example that satisfies the assumptions of our analysis.
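A schematic sketch of the kind of iteration described above may help. It assumes a step of the form x_{k+1} = x_k - mu [ G_i(x_k)^*(F_i(x_k) - y_i) + lam B^*(B x_k - b) ], where G_i is a Bouligand subderivative of a non-smooth F_i and the pair (B, b) stands in for the trained bounded linear operator and its training-data reference. The toy maps F_i(x) = max(a_i . x, 0), the particular subderivative choice, and the randomly drawn B and b are all hypothetical; the talk's exact scheme may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
n, M = 5, 40                                   # unknowns, number of equations
A = rng.standard_normal((M, n)) / np.sqrt(n)
x_true = rng.standard_normal(n)
A[A @ x_true < 0] *= -1.0                      # make every equation active at x_true

# Non-smooth forward maps F_i(x) = max(a_i . x, 0), with exact data
def F(i, x):
    return max(A[i] @ x, 0.0)

def bouligand(i, x):
    # one Bouligand subderivative of F_i: a_i on {a_i . x >= 0}, zero otherwise
    return A[i] if A[i] @ x >= 0 else np.zeros(n)

y = np.array([F(i, x_true) for i in range(M)])

# Hypothetical data-driven term: in the talk, B and the reference b would be
# learned from training data; here they are a random stand-in for illustration.
B = rng.standard_normal((n, n)) / n
b = B @ (x_true + 0.1 * rng.standard_normal(n))

x = np.zeros(n)
mu, lam = 0.2, 0.01
for k in range(8000):
    i = rng.integers(M)                        # randomly chosen equation
    r = F(i, x) - y[i]
    x = x - mu * (bouligand(i, x) * r + lam * B.T @ (B @ x - b))

residual = np.linalg.norm([F(i, x) - y[i] for i in range(M)])
print(residual, np.linalg.norm(x - x_true))
```

Because the data are exact and the toy system is consistent, the data-misfit part of the step vanishes at the solution, while the small data-driven term only gently biases the iterates toward the training reference.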
Deep Learning in Scientific Computing (March 24 - March 28, 2025)
Jointly Organized by: CMFC and CMLBDA, LNMIIT, Jaipur. Supported by: Indian Society for Mathematical Modelling and Computer Simulation (ISMMACS).
Control Theory for Partial Differential Equations (Dec. 04 – Dec. 16, 2023)
Organized by: Department of Mathematics, IISER, Thiruvananthapuram. Supported by: NCMW-ATM School.
Mathematics for Health Sciences (Dec. 28, 2023 – Jan. 06, 2024)
Organized by: Department of Mathematics, BITS Pilani, Pilani Campus. Supported by: CIMPA School.