Heriot-Watt University

Colin Maclaurin Building CMT.19A
Edinburgh EH14 4AS
a.repetti@hw.ac.uk
Associate Professor
Department of Actuarial Mathematics and Statistics, School of Mathematical and Computer Sciences
Institute of Sensors, Signals, and Systems, School of Engineering and Physical Sciences
Maxwell Institute for Mathematical Sciences - Edinburgh 

Python codes

Primal-dual plug-and-play for computational optical imaging with a photonic lantern

Optical fibres are used to image in-vivo biological processes. In this context, high spatial resolution and stability to fibre movements are key to enabling decision-making processes (e.g., for microendoscopy). Recently, a single-pixel imaging technique based on a multicore fibre photonic lantern has been designed, named computational optical imaging using a lantern (COIL). A proximal algorithm based on a sparsity prior, dubbed SARA-COIL, has further been proposed to enable image reconstruction for high-resolution COIL microendoscopy. In this work, we develop a data-driven approach for COIL. We replace the sparsity prior in the proximal algorithm with a learned denoiser, leading to a plug-and-play (PnP) algorithm. We use recent results in learning theory to train a network with desirable Lipschitz properties, and we show that the resulting primal-dual PnP algorithm converges to a solution of a monotone inclusion problem. Our simulations highlight that the proposed data-driven approach improves the reconstruction quality over the variational SARA-COIL method on both simulated and real data.
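As a rough illustration of the primal-dual PnP idea described above, the sketch below runs Condat-Vu-type iterations in which the proximity operator of the regulariser is replaced by a generic denoiser. The operator `Phi`, the step sizes and the denoiser are illustrative assumptions, not the released SARA-COIL/PnP code; in the actual method the denoiser is a trained network with a controlled Lipschitz constant.

```python
import numpy as np

def pnp_primal_dual(y, Phi, denoiser, tau, sigma, n_iter=1000):
    """Condat-Vu-type primal-dual iterations for min_x g(Phi x) + r(x),
    where the prox of the regulariser r is replaced by `denoiser` (PnP)
    and g(z) = 0.5 ||z - y||^2 is the data-fidelity term.
    Convergence requires tau * sigma * ||Phi||^2 <= 1."""
    x = np.zeros(Phi.shape[1])
    u = np.zeros(Phi.shape[0])
    for _ in range(n_iter):
        # primal step: denoiser in place of a proximity operator
        x_new = denoiser(x - tau * (Phi.T @ u))
        # dual step: prox of the Fenchel conjugate of the l2 data fit
        v = u + sigma * (Phi @ (2 * x_new - x))
        u = (v - sigma * y) / (1 + sigma)
        x = x_new
    return x
```

With the identity map as "denoiser" the scheme reduces to a plain primal-dual method for the data-fit term alone, which gives a convenient sanity check on a consistent, overdetermined system.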
Related articles

Uncertainty Quantification in CT pulmonary angiography

Computed tomography (CT) imaging of the thorax is widely used for the detection and monitoring of pulmonary embolism (PE). However, CT images can contain artifacts due to the acquisition or the processes involved in image reconstruction, and radiologists often have to distinguish between such artifacts and actual PEs. Our main contribution comes in the form of a scalable hypothesis testing method for CT, to quantify the uncertainty of possible PEs. In particular, we introduce a Bayesian framework to quantify the uncertainty of an observed compact structure that can be identified as a PE. We assess the ability of the method to operate in high-noise environments and with insufficient data.
Related article

A. M. Rambojun, H. Komber, J. Rossdale, J. Suntharalingam, J. C. L. Rodrigues, M. J. Ehrhardt, and A. Repetti, Uncertainty Quantification in CT pulmonary angiography, Accepted for publication in PNAS Nexus, Oct. 2023. [pdf]


Distributed Block-Split Gibbs (DSGS) sampler for image restoration

Sampling-based algorithms are classical approaches to perform Bayesian inference in inverse problems. They provide estimators along with credibility intervals to quantify the uncertainty on these estimators. Although such methods hardly scale to high-dimensional problems, they have recently been paired with optimization techniques, such as proximal and splitting approaches, to address this issue. These approaches pave the way to distributed samplers, splitting computations to make inference more scalable and faster. We introduce a distributed Split Gibbs sampler (SGS) to efficiently sample from distributions involving multiple smooth and non-smooth functions composed with linear operators. The proposed approach leverages a recent approximate augmentation technique reminiscent of primal-dual optimization methods. It is further combined with a block-coordinate approach to split the primal and dual variables into blocks, leading to a distributed block-coordinate SGS. The resulting algorithm exploits the hypergraph structure of the involved linear operators to efficiently distribute the variables over multiple workers under controlled communication costs. It accommodates several distributed architectures, such as the Single Program Multiple Data and client-server architectures. Experiments on a large image deblurring problem show the ability of the proposed approach to produce high-quality estimates with credibility intervals in a small amount of time.
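To make the splitting idea concrete, here is a minimal toy version of a split Gibbs sampler on a purely Gaussian model, where an auxiliary variable z ≈ Kx decouples a prior applied through a linear operator K from the likelihood. The model choices (quadratic prior, scalar coupling rho2) and all names are illustrative assumptions; the actual DSGS handles non-smooth terms and distributes blocks of variables over workers.

```python
import numpy as np

def split_gibbs(y, K, sigma2, beta, rho2, n_samples, burn=1000, seed=0):
    """Toy split Gibbs sampler for the target
        p(x | y) ∝ exp(-||y - x||^2 / (2 sigma2) - beta ||K x||^2 / 2),
    using the augmentation z ≈ K x with coupling parameter rho2.
    Both conditionals x | z and z | x are Gaussian, so sampling is exact."""
    rng = np.random.default_rng(seed)
    n, m = y.size, K.shape[0]
    # the precision of x | z does not depend on z: factor it once
    A = np.eye(n) / sigma2 + K.T @ K / rho2
    chol = np.linalg.cholesky(A)
    prec_z = beta + 1.0 / rho2
    z = np.zeros(m)
    samples = []
    for k in range(burn + n_samples):
        # sample x | z  ~  N(A^{-1} b, A^{-1}) via the Cholesky factor
        b = y / sigma2 + K.T @ z / rho2
        x = np.linalg.solve(A, b) + np.linalg.solve(chol.T, rng.standard_normal(n))
        # sample z | x  (separable: one independent Gaussian per coordinate)
        z = (K @ x / rho2) / prec_z + rng.standard_normal(m) / np.sqrt(prec_z)
        if k >= burn:
            samples.append(x)
    return np.array(samples)
```

In this toy model the augmented marginal of x has precision I/sigma2 + beta/(1 + beta * rho2) KᵀK, so taking rho2 small recovers the original target, mirroring the approximate augmentation used in the paper.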
Related articles

Learning maximally monotone operators

We introduce a new paradigm for solving regularized variational problems. These are typically formulated to address ill-posed inverse problems encountered in signal and image processing. The objective function is traditionally defined by adding a regularization function to a data fit term, which is subsequently minimized by using iterative optimization algorithms. Recently, several works have proposed to replace the operator related to the regularization by a more sophisticated denoiser. These approaches, known as plug-and-play (PnP) methods, have shown excellent performance. Although it has been noticed that, under some Lipschitz properties on the denoisers, the convergence of the resulting algorithm is guaranteed, little is known about characterizing the asymptotically delivered solution. In the current article, we propose to address this limitation. More specifically, instead of employing a functional regularization, we perform an operator regularization, where a maximally monotone operator (MMO) is learned in a supervised manner. This formulation is flexible as it allows the solution to be characterized through a broad range of variational inequalities, and it includes convex regularizations as special cases. From an algorithmic standpoint, the proposed approach consists in replacing the resolvent of the MMO by a neural network (NN). We present a universal approximation theorem proving that nonexpansive NNs are suitable models for the resolvent of a wide class of MMOs. The proposed approach thus provides a sound theoretical framework for analyzing the asymptotic behavior of first-order PnP algorithms. In addition, we propose a numerical strategy to train NNs corresponding to resolvents of MMOs. We apply our approach to image restoration problems and demonstrate its validity in terms of both convergence and quality.
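The construction can be illustrated with a hand-built example: a firmly nonexpansive map D = (I + Q)/2, with Q nonexpansive, is the resolvent of some maximally monotone operator, and plugging it into a forward-backward scheme yields convergent iterates. The tanh/spectral-normalisation "network" below is a stand-in for a trained model, chosen only so that the Lipschitz bound holds by construction; it is not the training strategy of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = rng.standard_normal((n, n))
W /= 1.01 * np.linalg.norm(W, 2)   # spectral normalisation: ||W||_2 < 1

def Q(x):
    """Nonexpansive map: tanh (1-Lipschitz) composed with ||W||_2 < 1."""
    return np.tanh(W @ x)

def D(x):
    """Firmly nonexpansive 'denoiser' D = (I + Q)/2: the resolvent of
    the maximally monotone operator A = D^{-1} - I."""
    return 0.5 * (x + Q(x))

# smooth data-fit term f(x) = 0.5 ||x - y||^2, whose gradient is 1-Lipschitz
y = rng.standard_normal(n)
grad_f = lambda x: x - y

# PnP forward-backward: x_{k+1} = D(x_k - gamma * grad_f(x_k))
gamma = 0.5
x = np.zeros(n)
for _ in range(300):
    x = D(x - gamma * grad_f(x))

# a fixed point satisfies the monotone inclusion 0 ∈ gamma * grad_f(x) + A(x)
residual = np.linalg.norm(x - D(x - gamma * grad_f(x)))
```

Here 2D − I = Q, so firm nonexpansiveness of D is equivalent to nonexpansiveness of Q, which is exactly the property the paper proposes to enforce during training.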
Related articles

Minicourse Proximage @SMAI-MODE June 2022

This course has been created for “Journées SMAI-MODE 2022, Limoges”
Image processing aims to extract or interpret the information contained in observed data linked to one (or more) image(s). Most analysis tools are based on the formulation of an objective function and the development of suitable optimization methods. This class of approaches, qualified as variational, has become the state of the art for many image processing modalities, thanks to its ability to deal with large-scale problems, its versatility in adapting to different contexts, and the associated theoretical results ensuring convergence towards a minimizer of the chosen objective function.
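As an elementary instance of this variational formulation (and a warm-up for the notebooks below), the sketch minimises a least-squares data fit plus an l1 regulariser with the forward-backward (ISTA) iteration. The operator and parameters are illustrative assumptions, not taken from the course material.

```python
import numpy as np

def ista(y, A, lam, n_iter=200):
    """Forward-backward (ISTA) iterations for the objective
        F(x) = 0.5 ||A x - y||^2 + lam ||x||_1.
    Each step is a gradient step on the data fit followed by the
    proximity operator of the l1 norm (soft-thresholding)."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L                           # forward step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # backward (prox) step
    return x
```

When A is the identity, the iteration reduces to soft-thresholding the observation, the classical closed-form l1 denoiser, which makes a simple sanity check before moving to deconvolution.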
Slides of the course are available here
Python notebook

1 - Play with the direct model - Notebook

2 - Image deconvolution with the Forward-Backward algorithm, FISTA and the Condat-Vu algorithm - Notebook

3- Image denoising with Plug-and-Play Forward-Backward - Notebook