1st French-Italian workshop on the Mathematics of Imaging, Vision and their Applications (MIA-MIVA)

September 12-14, 2022, Laboratoire I3S, Sophia-Antipolis, France

Welcome to the website of the 1st joint French-Italian workshop on the Mathematics of Imaging, Vision and their Applications, which will be held at the Laboratoire I3S, Sophia-Antipolis, France, on September 12-14, 2022.

Topics

The purpose of this workshop is to foster collaborations between the French RT MIA and the Italian Gruppo UMI MIVA by bringing together researchers actively working in the rapidly evolving field of mathematical imaging, vision and their applications.

Participation & registration

The event will take place in a hybrid format.

  • Due to the limited capacity of the conference rooms, on-site participation is restricted to 50 participants.

  • The event will also be streamed virtually via Zoom.

Registration for on-site attendance is now closed. Please write an e-mail to calatroni[at]i3s[dot]unice[dot]fr if you would like to register for virtual attendance.

Venue & transports

The event will take place in the Conference Room of the I3S laboratory in Sophia-Antipolis.

  • The laboratory I3S can be reached from Antibes (Place de Gaulle/Dugommier) by bus 100 (closest stop: Templiers) or by the bus-tram A (closest stop: INRIA) and from Nice by bus 230 (closest stop: INRIA).

  • Several car parking spots are available outside the laboratory.

Visit the ENVIBUS website to plan your itinerary.

Program

We are hosting six invited speakers from both French and Italian institutions, as well as selected contributed talks by young researchers from both communities.

Monday September 12

13:30-14:00: Welcome (slides with announcements)

14:00-14:45: invited talk by Andrés Almansa (CNRS, MAP5, FR)

Bayesian Imaging with Plug & Play Priors: implicit, explicit & unrolled cases

We consider inverse problems in imaging where the likelihood is known and the prior is encoded by a neural network. From a Bayesian perspective the solution of the inverse problem is described by the posterior distribution p(x|y) of the ideal image x given the observations y. We discuss different ways to maximise p(x|y) and to take samples from it.

The first part of the talk concentrates on the case where the plug & play prior p(x) is implicit in a pretrained neural network denoiser. We present the plug & play stochastic gradient descent (PnP-SGD) algorithm for posterior maximisation and the plug & play unadjusted Langevin algorithm (PnP-ULA) for posterior sampling.

The second part of the talk shows how to repurpose a pretrained hierarchical VAE model as a prior for posterior sampling in the particular case of single image super-resolution.

The third part of the talk considers posterior maximisation for super-resolution and deblurring with a spatially varying blur kernel. The plug & play ADMM algorithm would require computing the proximal operator of a spatially varying blur, which is computationally intractable. To avoid this, we propose a linearized plug & play algorithm and an unrolled version thereof, which produces competitive results with respect to state-of-the-art algorithms.


Joint work with: Rémi Laumont, Jean Prost, Charles Laroche, Valentin De Bortoli, Julie Delon, Antoine Houdard, Nicolas Papadakis, Marcelo Pereyra, Matias Tassano.
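
As a rough illustration of the plug & play Langevin idea described above (only of the general mechanism, not of the exact algorithm presented in the talk), the sketch below performs one PnP-ULA-style update per iteration, using a hypothetical grad_log_likelihood for the known forward model and a hypothetical pretrained denoiser whose residual approximates the prior score via Tweedie's identity:

```python
import numpy as np

def pnp_ula(y, x0, grad_log_likelihood, denoiser, step, eps, n_iter=1000, rng=None):
    """Sketch of a plug & play unadjusted Langevin sampler.

    grad_log_likelihood(x, y): gradient of log p(y|x) with respect to x (known forward model).
    denoiser(x): pretrained Gaussian denoiser encoding the implicit prior; the residual
    (denoiser(x) - x) / eps approximates the prior score grad log p(x) (Tweedie's identity).
    """
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float).copy()
    samples = []
    for _ in range(n_iter):
        score_prior = (denoiser(x) - x) / eps            # approximate grad log p(x)
        drift = grad_log_likelihood(x, y) + score_prior  # approximate grad log p(x|y)
        x = x + step * drift + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
        samples.append(x.copy())
    return samples  # Markov chain approximately targeting the posterior p(x|y)
```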

14:45-15:00: contributed talk by Vasiliki Stergiopoulou (UCA, FR)

From Optimisation to Learning in Super-Resolution Fluorescence Microscopy

In fluorescence microscopy, lateral resolution is limited due to the diffraction of visible light. To overcome this limitation, super-resolution fluorescence microscopy techniques have been proposed in the literature. One of them, COvariance-based l0 super-Resolution Microscopy with intensity Estimation (COL0RME), reconstructs a super-resolved image from a temporal stack of diffraction-limited images obtained by common fluorescence microscopes (e.g., widefield, TIRF). In this presentation, I will discuss an extension of the COL0RME method using an image denoiser (e.g., a pretrained network) as an imaging prior following the Plug-and-Play (PnP) reconstruction framework.


15:00-15:15: contributed talk by Silvia Sciutto (MaLGA, University of Genoa, IT)

Continuous Generative Neural Networks for Inverse Problems


We study Continuous Generative Neural Networks (CGNNs), namely, generative models in the continuous setting. The architecture is inspired by DCGAN, with one fully connected layer, several convolutional layers and nonlinear activation functions. In the continuous L^2 setting, the dimensions of the spaces of each layer are replaced by the scales of a multiresolution analysis of a compactly supported wavelet. We present conditions on the convolutional filters and on the nonlinearity that guarantee that a CGNN is injective. This theory finds applications to inverse problems, and allows for deriving Lipschitz stability estimates for (possibly nonlinear) infinite-dimensional inverse problems with unknowns belonging to the manifold generated by a CGNN.

15:15-15:30: contributed talk by Ngoc Long Nguyen (ENS Paris-Saclay, FR)

Self-Supervised Super-Resolution for Multi-Exposure Push-Frame Satellites

Modern Earth observation satellites capture multi-exposure bursts of push-frame images that can be super-resolved via computational means. In this work, we propose a super-resolution method for such multi-exposure sequences, a problem that has received very little attention in the literature. The proposed method can handle the signal-dependent noise in the inputs, process sequences of any length, and be robust to inaccuracies in the exposure times. Furthermore, it can be trained end-to-end with self-supervision, without requiring ground truth high resolution frames, which makes it especially suited to handle real data. Central to our method are three key contributions: i) a base-detail decomposition for handling errors in the exposure times, ii) a noise-level-aware feature encoding for improved fusion of frames with varying signal-to-noise ratio and iii) a permutation invariant fusion strategy by temporal pooling operators. We evaluate the proposed method on synthetic and real data and show that it outperforms by a significant margin existing single-exposure approaches that we adapted to the multi-exposure case.
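
As a minimal sketch of the permutation-invariant temporal pooling mentioned in point iii) above (the function name and feature shapes are ours, purely for illustration), per-frame features can be fused with order-independent operators such as the mean and the maximum over the temporal axis:

```python
import numpy as np

def temporal_pooling_fusion(frame_features):
    """Fuse per-frame feature maps of shape (T, C, H, W) in a permutation-invariant way.

    Averaging and taking the maximum over the temporal axis are both invariant to the
    ordering of the frames, so the fused representation does not depend on the order
    of the input frames.
    """
    frame_features = np.asarray(frame_features)
    mean_pool = frame_features.mean(axis=0)   # (C, H, W)
    max_pool = frame_features.max(axis=0)     # (C, H, W)
    return np.concatenate([mean_pool, max_pool], axis=0)  # (2C, H, W), passed to a decoder

# Example: a burst of 7 frames with 16-channel, 32x32 feature maps
fused = temporal_pooling_fusion(np.random.rand(7, 16, 32, 32))
```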

15:30-16:00: Coffee break

16:00-16:45: invited talk by Alessandro Lanza (University of Bologna, IT)

Unmasked and masked principles for automatic parameter selection in variational models for Poisson noise corruption

Effectiveness of variational methods for restoring images corrupted by Poisson noise strongly depends on a suitable selection of the regularization parameter balancing the effect of the regularization term(s) and of the generalised Kullback-Leibler divergence data term. In this talk, we review, analyse and compare experimentally the parameter selection approaches proposed in the literature, together with some new ones, ranging from the two classical approximated discrepancy principles by Zanella et al. and by Bardsley and Goldes to the very recent nearly exact discrepancy principle and Poisson whiteness principle by Bevilacqua et al. All these principles are unmasked, in the sense that all the pixels in the image are considered in the computations. The idea of masking the image by considering only pixels measuring a positive number of photons was proposed by Carlavan and Blanc-Féraud to deal effectively with dark backgrounds and/or low photon-count regimes. However, the masked selection principles proposed so far are biased, in the sense that they do not adapt the underlying statistics to the masking. Hence, in this talk we also propose three novel unbiased masked selection principles which, on average, compare favourably with the unmasked and the masked biased principles.
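
As a simplified illustration of how a discrepancy-type selection rule of the kind discussed above operates (this is only the generic mechanism, not any of the specific principles in the talk), one can search for the regularization parameter whose reconstruction brings the generalised Kullback-Leibler data term close to a prescribed target value; the solve_variational routine below is a hypothetical placeholder for the chosen restoration solver:

```python
import numpy as np

def generalized_kl(y, z, eps=1e-12):
    """Generalised Kullback-Leibler divergence KL(y, z), the Poisson data term."""
    return np.sum(z - y + y * np.log((y + eps) / (z + eps)))

def select_lambda_by_discrepancy(y, A, solve_variational, lambdas, target):
    """Grid-search sketch: pick the regularization parameter whose reconstruction
    brings the KL data term closest to a prescribed discrepancy target.

    solve_variational(y, A, lam): hypothetical solver returning the restored image.
    A: forward operator given as a callable x -> A(x).
    """
    best_lam, best_gap = None, np.inf
    for lam in lambdas:
        x_lam = solve_variational(y, A, lam)
        gap = abs(generalized_kl(y, A(x_lam)) - target)
        if gap < best_gap:
            best_lam, best_gap = lam, gap
    return best_lam
```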

16:45-17:00: contributed talk by Monica Pragliola (University of Naples, IT)

Residual whiteness principle for automatic parameter selection in imaging problems


We propose an automatic parameter selection strategy for some relevant imaging problems, such as image restoration and super-resolution, when the corrupting noise is known to be additive white Gaussian with unknown standard deviation. The proposed approach exploits the structure of the forward model operator in the frequency domain and computes the optimal regularization parameter as the one optimizing a suitably defined residual whiteness measure. After detailing the theoretical properties allowing us to express the whiteness functional in a compact way, we show numerical experiments proving the effectiveness of the proposed approach for different types of problems, in comparison with well-known parameter selection strategies such as, e.g., the discrepancy principle.
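
In a very simplified form, the residual whiteness idea described above can be illustrated as follows: for a white residual the normalised autocorrelation is concentrated at zero lag, so its off-origin energy can serve as a selection criterion. The sketch below (with a hypothetical solve routine standing in for the variational solver) only conveys this idea and does not reproduce the compact frequency-domain formulation of the talk:

```python
import numpy as np

def whiteness_measure(residual):
    """Off-origin energy of the normalised autocorrelation of a 2D residual.

    For a white residual the autocorrelation is (ideally) a Dirac delta at zero lag,
    so smaller values indicate a 'whiter' residual.
    """
    f = np.fft.fft2(residual)
    autocorr = np.real(np.fft.ifft2(np.abs(f) ** 2))   # circular autocorrelation
    autocorr /= autocorr[0, 0]                          # normalise: value 1 at zero lag
    return float(np.sum(autocorr ** 2) - 1.0)           # energy away from the origin

def select_lambda_by_whiteness(y, A, solve, lambdas):
    """Return the lambda whose residual y - A(x_lambda) looks most like white noise.
    solve(y, A, lam) is a hypothetical variational solver, A a callable forward operator."""
    scores = [whiteness_measure(y - A(solve(y, A, lam))) for lam in lambdas]
    return lambdas[int(np.argmin(scores))]
```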

Tuesday September 13

9:30-10:15: invited talk by Barbara Pascal (CNRS, FR)

Texture segmentation based on fractal attributes using convex functional minimization with generalized Stein formalism for automated regularization parameter selection

Texture segmentation still constitutes an ongoing challenge, especially when processing large real-world images. The aim of this work is twofold. First, we provide a variational model for simultaneously extracting and regularizing local texture features, such as local regularity and local variance. For this purpose, a scale-free wavelet-based model, penalized by a Total Variation regularizer, is embedded into a convex optimisation framework. The resulting functional is shown to be strongly convex, leading to a fast minimization scheme. Second, we investigate Stein-like strategies for the selection of regularization parameters. A generalized Stein estimator of the quadratic risk is built and then minimized via a quasi-Newton algorithm relying on a proposed generalized estimator of the gradient of the risk with respect to the hyperparameters, leading to an automated and data-driven tuning of the regularization parameters. The overall procedure is illustrated on multiphasic flow images, analyzed as part of a long-term collaboration with physicists from the Laboratoire de Physique of ENS Lyon.

10:15-10:30: contributed talk by Stefano Aleotti (University of Insubria, IT)

Fractional graph Laplacian for image reconstruction


In a general Tikhonov setting, we consider the fractional graph Laplacian as a regularization term. We show that, in image deblurring and tomography problems, the fractional power allows us to achieve better reconstructions compared to the standard graph Laplacian.
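
As a minimal numerical sketch of this kind of regularizer (the graph, forward operator and parameter values below are illustrative choices of ours, not those of the talk), a fractional power of a symmetric graph Laplacian can be built from its eigendecomposition and plugged into a Tikhonov-type least-squares problem:

```python
import numpy as np

def fractional_laplacian(L, alpha):
    """L^alpha for a symmetric (graph) Laplacian L, via its eigendecomposition."""
    w, V = np.linalg.eigh(L)
    w = np.clip(w, 0.0, None)               # guard against tiny negative eigenvalues
    return (V * w ** alpha) @ V.T

def tikhonov_fractional(A, b, L, alpha, lam):
    """Solve min_x ||A x - b||^2 + lam * x^T L^alpha x via the normal equations."""
    La = fractional_laplacian(L, alpha)
    return np.linalg.solve(A.T @ A + lam * La, A.T @ b)

# Tiny example: path-graph Laplacian on 5 nodes, random forward operator
n = 5
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1
A = np.random.default_rng(0).standard_normal((n, n))
x_rec = tikhonov_fractional(A, A @ np.ones(n), L, alpha=0.5, lam=0.1)
```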

10:30-10:45: contributed talk by Martin Huska (University of Bologna, IT)

Variational additive decomposition of images and signals into structure, harmonic and oscillatory components


In this talk we will discuss a nonconvex variational decomposition model which separates a given image into piecewise-constant, smooth, and oscillatory components. This decomposition is motivated not only by image denoising and structure separation, but also by shadow and spot-light removal. The proposed model clearly separates the piecewise-constant structure from the smoothly varying harmonic part, thanks to the separate oscillatory component. The piecewise-constant part is captured by a TV-like nonconvex regularization, the harmonic part by a second-order regularization, and the oscillatory (noise and texture) part by an $H^{-1}$-norm penalty.

An efficient minimization based on the alternating direction method of multipliers is used for fast numerical solution of the optimization problem. The model can be easily adapted to 1D signal decomposition.

10:45-11:00: contributed talk by Luca Ratti (MaLGA, University of Genova, IT)

Learning the optimal Tikhonov regularizer for linear inverse problems


We consider a data-driven regularization strategy for linear inverse problems, such as denoising, deblurring, etc. We focus on the class of generalized Tikhonov regularizers and aim at learning the optimal one by taking advantage of sample data. In an infinite-dimensional framework, we provide the expression for the optimal regularizer and define two strategies to approximate it, both in a supervised and in an unsupervised setting. We show theoretical bounds for the excess risk, which are also verified by numerical experiments. This is a joint project with G. Alberti, E. De Vito, M. Lassas and M. Santacesaria.
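
To fix ideas on the class of regularizers considered here, a generalized Tikhonov reconstruction admits, in finite dimensions, the closed-form solution sketched below (the choices of A, B and h are purely illustrative and are not the learned optimum discussed in the talk):

```python
import numpy as np

def generalized_tikhonov(A, y, B, h, lam=1.0):
    """Minimise ||A x - y||^2 + lam * ||B (x - h)||^2 via the normal equations."""
    BtB = B.T @ B
    return np.linalg.solve(A.T @ A + lam * BtB, A.T @ y + lam * BtB @ h)

# Toy denoising example: A = identity, B = first-difference operator, h = 0
n = 50
A = np.eye(n)
B = np.diff(np.eye(n), axis=0)               # (n-1) x n finite-difference matrix
rng = np.random.default_rng(0)
x_true = np.cumsum(rng.standard_normal(n)) * 0.1
y = x_true + 0.05 * rng.standard_normal(n)
x_hat = generalized_tikhonov(A, y, B, np.zeros(n), lam=5.0)
```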

11:00-11:30: Coffee break

11:30-12:15: invited talk by Giovanni S. Alberti (MaLGA, University of Genoa, IT)

A few examples of instability in machine learning

Many of the breakthroughs of recent years in the applied sciences have been due to important advances in machine learning, and in particular in deep learning. However, these methods have been shown to be unstable and susceptible to adversarial perturbations and deformations: a small error in the input may yield a large error in the output. In this talk, after a brief review of these phenomena, I will discuss their implications for the use of machine learning methods in inverse problems and imaging.

12:15-14:00: Lunch break at the INRIA SAM canteen

14:00-14:45: invited talk by Elena Loli Piccolomini (University of Bologna, IT)

Few views CT image reconstruction: efficient numerical approaches for 3D applications

Combining patient-safe acquisition protocols with high-quality images is one of the most important goals of medical imaging and a crucial target for researchers working on minimally invasive Computed Tomography (CT). A practical way to lower the radiation dose per patient consists in reducing the number of X-ray projections (few-view CT), which leads to incomplete tomographic data but very fast examinations. The image reconstruction process is an ill-posed inverse problem. The data incompleteness and the huge problem size in real 3D Cone Beam CT applications make the reconstruction very challenging. We propose different approaches, from model-based algorithms to new approaches using deep learning tools, showing results from simulated projections of 2D and 3D phantoms and real projections of 3D objects.

14:45-15:00: contributed talk by Margherita Scipione (University of Modena and Reggio Emilia, IT)

Deep Unfolding Network for few-view tomographic image reconstruction


A crucial goal of CT is to obtain high-quality images using the least invasive acquisition processes possible. One possible strategy is to reduce the number of X-ray projections (few-view CT). Such a strategy leads to an ill-conditioned inverse problem with infinitely many solutions. A recent approach to this problem is deep unfolding, which aims to combine the advantages of variational and deep learning approaches. In this work, a Deep Unfolding Network is proposed, obtained by unfolding a proximal interior point algorithm for the variational formulation of the few-view CT problem. The experiments performed on a synthetic dataset and on a realistic one confirm that the method achieves competitive performance compared to other methods, even in contexts where noise is significant. In particular, the results show that the proposed architecture is particularly stable, since it dampens the perturbations generated by noise levels different from those used in the training phase.

15:00-15:15: contributed talk by Vanna Lisa Coli (UCA, FR)

Tomographic imaging: two examples of medical and archaeological applications

In this talk, two applications of tomographic imaging will be presented. The first application concerns the detection and monitoring of brain strokes by means of microwave tomography, while the second deals with the analysis of tomographic images of Neolithic pottery, with the aim of better understanding the manufacturing techniques and technical traditions of the time.

15:15-15:30: contributed talk by Nathanael Munier (ENS Rennes, FR)

Achieving the Cramér-Rao lower bound in single molecule localization microscopy


Single molecule localization microscopy (SMLM) -- a Nobel Prize-winning fluorescence microscopy technique -- makes it possible to reach resolutions beyond Abbe's limit. It consists in localizing isolated fluorescent molecules with sub-pixel accuracy. In this talk, I will provide theoretical bounds that ensure the reliability of localization algorithms. In particular, I will show that the maximum likelihood estimator (MLE) attains the Cramér-Rao lower bound when the data suffers from additive Gaussian noise, conditionally on an event that happens with high probability. The Cramér-Rao bound is a fundamental limit on the achievable precision of any unbiased estimator. This result therefore shows the optimality of the MLE in this context, which was an open question.
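
For reference, under the standard model of additive i.i.d. Gaussian noise with known variance, $y_i = \mu_i(\theta) + \varepsilon_i$ with $\varepsilon_i \sim \mathcal{N}(0,\sigma^2)$, the textbook form of the bound (not the specific conditional statement of the talk) reads
\[
\operatorname{Cov}(\hat{\theta}) \succeq I(\theta)^{-1},
\qquad
I(\theta) = \frac{1}{\sigma^2} \sum_i \nabla_\theta \mu_i(\theta)\, \nabla_\theta \mu_i(\theta)^\top,
\]
for any unbiased estimator $\hat{\theta}$ of the molecule parameters (e.g. its position), where $I(\theta)$ denotes the Fisher information matrix.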

15:30-15:45: contributed talk by Marta Lazzaretti (University of Genoa, IT & UCA, FR)

3D off-the-grid deconvolution in fluorescence microscopy


Fluorescence microscopy is a fundamental tool to investigate biological structures. However, acquisitions are affected by blurring arising from light diffraction, making the reconstruction of fine-scale details challenging. Standard deconvolution methods are performed on the grid, meaning that the final reconstruction is an image in the discrete setting. When the imaged biological samples are very sparse, it is often convenient to use off-the-grid reconstruction methods. In this scenario, the aim is to reconstruct the number of spikes present in the sample, together with their intensities and positions, in the space of Radon measures, using the Sliding Frank-Wolfe algorithm. We study off-the-grid deconvolution approaches for 3-dimensional acquisitions and present some preliminary results.

15:45-17:00: RT MIA and Gruppo UMI MIVA meeting

Wednesday September 14

9:30-10:15: invited talk by Elie Bretin (INSA, Lyon, FR)

Mean curvature flow, neural networks, and applications

Many applications in image processing (denoising, segmentation), data science (point cloud smoothing, shape matching), material sciences (grain evolution in alloys, crystal growth) or biology (cell modeling) require the approximation of geometric interface evolution such as the emblematic mean curvature flow.

In this context, the phase field method is a particularly efficient tool to approximate the evolution of oriented surfaces, but things turn out to be much more difficult for non-oriented surfaces. I will explain how interface evolutions can be approximated even in this case by training a neural network whose structure is derived from classical schemes associated with the Allen-Cahn equation. I will show applications of this new approach to the approximation of solutions to the Steiner and Plateau problems.
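
For context, the classical schemes mentioned above discretize the Allen-Cahn equation, the prototypical phase-field approximation of mean curvature flow, which with a double-well potential such as $W(s) = \tfrac{1}{2}\, s^2 (1-s)^2$ reads
\[
\partial_t u_\varepsilon = \Delta u_\varepsilon - \frac{1}{\varepsilon^2}\, W'(u_\varepsilon),
\]
and whose interfaces approximate a mean curvature flow as $\varepsilon \to 0$; a typical splitting scheme alternates a heat-equation (convolution) step with a pointwise treatment of the reaction term $W'$.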

10:15-10:30: contributed talk by Quoc-Tung Le (ENS Lyon, FR)

See PDF.

10:30-10:45: contributed talk by Hippolyte Labarriére (INSA Toulouse, FR)

FISTA restart using an automatic estimation of the growth parameter

We introduce a restart scheme for FISTA (Fast Iterative Shrinkage-Thresholding Algorithm) adapted to functions with sharp geometry. This method, which is a generalization of Nesterov's accelerated gradient algorithm, is widely used in the field of large-scale convex optimization and provides fast convergence results under a strong convexity assumption. These convergence rates can be extended to weaker hypotheses, such as the \L{}ojasiewicz property, but this requires prior knowledge of the function of interest. In particular, most of the schemes providing fast convergence for non-strongly convex functions satisfying a quadratic growth condition involve the growth parameter, which is generally not known. Recent works show that restarting FISTA can ensure fast convergence for this class of functions without requiring any knowledge of the growth parameter. We improve these restart schemes by providing a better asymptotic convergence rate and by requiring a lower computational cost. We present numerical results emphasizing the efficiency of this method.
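
For concreteness, a generic restarted FISTA loop has the structure sketched below; this is only the standard template on which restart schemes operate, with a simple, commonly used objective-based restart test, and it does not reproduce the automatic growth-parameter estimation of the talk:

```python
import numpy as np

def fista_restart(grad_f, prox_g, obj, x0, step, n_iter=500):
    """Generic FISTA with a simple objective-based restart of the momentum.

    grad_f: gradient of the smooth part; prox_g(v, step): proximal map of the
    nonsmooth part; obj: full objective, used only in the restart test.
    """
    x_prev = np.asarray(x0, dtype=float).copy()
    z = x_prev.copy()
    t = 1.0
    for _ in range(n_iter):
        x = prox_g(z - step * grad_f(z), step)       # forward-backward step at the extrapolated point
        if obj(x) > obj(x_prev):                     # objective increased: reset the momentum
            t, z = 1.0, x_prev.copy()                # next iteration is a plain forward-backward step
            continue
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x + ((t - 1.0) / t_next) * (x - x_prev)  # Nesterov extrapolation
        x_prev, t = x, t_next
    return x_prev
```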

10:45-11:00: contributed talk by Danilo Pezzi (Università of Modena and Reggio Emilia, IT)

Explainable Bilevel Optimization: an Application to the Helsinki Deblur Challenge


In this talk, a bilevel optimization scheme for solving a general image deblurring problem is presented. With the use of machine learning tools, a variational approach with automatically learned parameters is able to achieve a high-quality reconstructed image, while maintaining the theoretical background and interpretability of variational models. The features of the bilevel scheme are tailored to the Helsinki Deblur Challenge 2021, which tasked the participants with restoring out-of-focus text images. In the lower-level problem, a fixed number of FISTA iterations is applied to an edge-preserving energy functional, while the parameters of the model are learned either through a similarity index or a support vector machine strategy.

11:00-11:30: Coffee break

11:30-11:45: contributed talk by Cesare Molinari (MaLGA, University of Genoa, IT)

Zeroth order optimization with orthogonal random directions

In this talk, we propose and analyze a randomized zeroth-order approach based on approximating the exact gradient by finite differences computed in a set of orthogonal random directions that changes with each iteration. A number of previously proposed methods are recovered as special cases including spherical smoothing, coordinate descent, as well as discretized gradient descent. Our main contribution is proving convergence guarantees as well as convergence rates under different parameter choices and assumptions. In particular, we consider convex objectives, but also possibly non-convex objectives satisfying the Polyak-Łojasiewicz (PL) condition. Theoretical results are complemented and illustrated by numerical experiments.

This is joint work with C. M., David Kozak, Lorenzo Rosasco, Luis Tenorio and Silvia Villa.
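
A stripped-down sketch of the random-orthogonal-directions idea (not the exact algorithm analysed in the talk, and with step sizes and parameters chosen arbitrarily) could look as follows: at each iteration a fresh set of orthonormal directions is drawn, directional derivatives are approximated by finite differences, and a surrogate gradient is assembled from them.

```python
import numpy as np

def zeroth_order_orthogonal(f, x0, n_dirs=5, h=1e-6, step=1e-1, n_iter=200, rng=None):
    """Zeroth-order descent using finite differences along random orthogonal directions.

    At each iteration, n_dirs orthonormal directions come from the QR factorization of a
    Gaussian matrix; the gradient is approximated by sum_i [(f(x + h p_i) - f(x)) / h] p_i
    and used in a gradient-type step.
    """
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float).copy()
    d = x.size
    for _ in range(n_iter):
        Q, _ = np.linalg.qr(rng.standard_normal((d, n_dirs)))  # d x n_dirs, orthonormal columns
        fx = f(x)
        g = np.zeros(d)
        for i in range(n_dirs):
            p = Q[:, i]
            g += (f(x + h * p) - fx) / h * p        # forward finite difference along p
        x -= step * (d / n_dirs) * g                # rescaled surrogate gradient step
    return x

# Example: minimise a simple quadratic without evaluating its gradient
x_min = zeroth_order_orthogonal(lambda v: np.sum((v - 3.0) ** 2), np.zeros(10))
```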

11:45-12:00: Closing remarks and greetings

12:00-13:30: Lunch break at the INRIA SAM canteen

Organisation

Financial support

This event is funded by the RT MIA and by the I3S laboratory, Sophia-Antipolis, France.

Contacts

For information/questions contact calatroni[at]i3s[dot]unice[dot]fr.