Imaging & inverse problems (IMAGINE) OneWorld seminars

A SIAM-IS virtual seminar series

The current COVID-19 global situation makes traveling impossible and is causing the cancellation of conferences and seminars all around the world. Inspired by the Probability, PDE, MINDS and MADS One World seminars, our One World IMAGing and INvErse problems (IMAGINE) seminar series aims to provide a forum for the exchange of ideas and networking for scientists worldwide working in the field of mathematical imaging and applied inverse problems.


Since October 2020, the seminar series has been labeled by the SIAM Activity Group on Imaging Science (SIAG-IS). Visit this page for more information on joining SIAM.



IMAGINE topics

Talks in this seminar series focus on the mathematical modelling, analysis and computational aspects of image processing and applied inverse problems, together with their application to real-world problems.

Dates, times and format

  • IMAGINE seminars will run on Wednesdays.

  • Seminars will start at 10am US Eastern time; click here to convert to your local time.

  • Seminars will take the format of Zoom Webinars.

  • Seminars will be 45 minutes long, with 15 minutes for questions moderated by the hosts of the call.

Mailing list

Please subscribe to the SIAM-IS IMAGINE virtual series by filling in this form.

Attending the webinar

We will use the Zoom Webinar platform. Prior to the beginning of the seminar, a Zoom link with password will be sent to the e-mail addresses of those who have registered on the mailing list. As a participant, your audio and video will be muted. You may ask questions at the end of the talk by clicking on the 'Raise your hand' button. You will then be unmuted and able to talk with the presenter.

Upcoming talks / January - June 2021 (3rd season)

All seminars will start at 10am US Eastern time.

Recordings will also be available on the IMAGINE SIAM AG IS YouTube channel.

The complete list of previous speakers, together with their YouTube recordings and slides, is available here (1st season) and here (2nd season).

WEDNESDAY JANUARY 20 - Coloma Ballester (UPF, ES)

Title: Adversarial strategies for out of distribution and for some inverse problems in imaging

Abstract: In this talk, some adversarial approaches in imaging will be discussed. First, a method for anomaly detection will be described. It is based on the unsupervised learning of the probability distribution of normal data through a GAN learning strategy and on a new anomaly detector that leverages a recorded history of the normal data generator. The result is an efficient and general anomaly detector that is free of assumptions on the data and can thus be applied to any data modality. Then, some adversarial approaches to image restoration problems such as inpainting and colorization will be described.
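
For readers less familiar with GAN-based anomaly detection, here is a minimal sketch of the general idea (illustrative only, not the speaker's exact detector, which additionally exploits a recorded history of the generator): a generator G trained only on normal data is asked to reconstruct a test sample, and a large residual flags an anomaly. Function and parameter names are assumptions.

```python
# Minimal sketch of reconstruction-based GAN anomaly scoring (illustrative only;
# the talk's detector additionally uses a recorded history of the generator).
import torch

def anomaly_score(x, G, latent_dim=128, steps=200, lr=1e-2):
    """Score a batch x by how well a normal-data generator G can reproduce it."""
    z = torch.randn(x.shape[0], latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((G(z) - x) ** 2)  # fit the latent code to the test sample
        loss.backward()
        opt.step()
    # Samples the generator cannot explain get a large residual, i.e. look anomalous.
    return torch.mean((G(z) - x) ** 2, dim=tuple(range(1, x.ndim))).detach()
```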

Video

WEDNESDAY JANUARY 27 - Michael Ng (HKU, HK)

Title: Low Rank Tensor Completion and its Applications

Abstract: In this talk, we study low rank tensor completion problems. The key issue is how to represent a low rank structure embedded in the underlying data. Several models and methods including transformation, dictionary, nonlocal similarity, factorization and regularization are discussed. Both theoretical and numerical results are presented to illustrate the proposed methods.
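
As background, and only as one of the several models the abstract alludes to, a standard convex formulation of low rank tensor completion minimizes a weighted sum of nuclear norms of the mode-$i$ unfoldings subject to the observed entries: $$\min_{\mathcal{X}} \ \sum_{i} \alpha_i \,\|\mathcal{X}_{(i)}\|_{*} \quad \text{s.t.} \quad \mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{T}),$$ where $\mathcal{T}$ holds the observed data, $\Omega$ indexes the observed entries and $\alpha_i \ge 0$ are weights. The transform-, dictionary-, nonlocal-similarity- and factorization-based models discussed in the talk replace this nuclear-norm surrogate with other low rank representations.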

Video

WEDNESDAY FEBRUARY 3 - Laurent Demanet (MIT, US)

Title: Imaging from deepfake data

Abstract: Neural networks might have an interesting and surprising role to play in the context of imaging/inversion from sensor data and physical models. They can sometimes generate helpful virtual “deepfake” data that weren’t originally recorded, but which extend the reach of inversion in a variety of ways. I will discuss three examples in the context of seismic imaging: bandwidth extension, data from virtual sources, and “physics swap”. Joint work with Hongyu Sun, Pawan Bharadwaj, Matt Li, and Leonardo Zepeda-Nunez.

Video

WEDNESDAY FEBRUARY 10 - Ji Hui (NUS, SGP)

Title: Self-supervised deep learning for image recovery

Abstract: In the last few years, deep learning has rapidly become a prominent tool for solving many challenging problems in image recovery. While most existing methods are supervised on a dataset of many degraded/truth image pairs to learn how to predict a truth image from its degraded counterpart, there is increasing interest in powerful deep learning methods for image recovery that are dataset-free. In this talk, we will introduce a general self-supervised deep learning method for image recovery that does not have any prerequisite on training samples. The key components include a random-sampling-based data augmentation technique and a Bayesian-deep-network-based approximate minimum mean squared error (MMSE) estimator. Extensive experiments show that our dataset-free deep learning method competes well against existing supervised-learning-based solutions to many image recovery problems, e.g., image denoising, image deconvolution, and compressed sensing.
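
As a rough illustration of what "dataset-free" training can look like (this is a generic masking-based scheme in the spirit of Noise2Self, not the random-sampling/Bayesian-MMSE method presented in the talk), a network can be trained on a single noisy image by predicting held-out pixels from their surroundings:

```python
# Generic self-supervised (dataset-free) training step on a single noisy image:
# mask a random subset of pixels and train the network to predict them from the
# remaining ones. Illustrative only; not the method presented in the talk.
import torch

def self_supervised_step(net, noisy, optimizer, mask_frac=0.05):
    mask = (torch.rand_like(noisy) < mask_frac).float()
    corrupted = noisy * (1 - mask) + torch.randn_like(noisy) * mask  # hide masked pixels
    optimizer.zero_grad()
    loss = torch.mean(mask * (net(corrupted) - noisy) ** 2) / mask.mean()
    loss.backward()
    optimizer.step()
    return loss.item()
```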

Video

WEDNESDAY FEBRUARY 17 - Anders Hansen (University of Cambridge, UK)

Title: On the foundations of computational mathematics, Smale’s 18th problem and the potential limits of AI

Abstract: There is profound optimism about the impact of deep learning (DL) and AI in the sciences, with Geoffrey Hinton concluding that 'They should stop training radiologists now'. However, DL has an Achilles heel: it is universally unstable, so that small changes in the initial data can lead to large errors in the final result. This has been documented in a wide variety of applications. Paradoxically, the existence of stable neural networks for these applications is guaranteed by the celebrated Universal Approximation Theorem; however, the stable neural networks are never computed by the current training approaches. We will address this problem and the potential limitations of AI from a foundations point of view. Indeed, the current situation in AI is comparable to the situation in mathematics in the early 20th century, when David Hilbert’s optimism (typically reflected in his 10th problem) suggested no limitations to what mathematics could prove and no restrictions on what computers could compute. Hilbert’s optimism was turned upside down by Gödel and Turing, who established limitations on what mathematics can prove and which problems computers can solve (however, without limiting the impact of mathematics and computer science).

We predict a similar outcome for modern AI and DL, where the limitations of AI (the main topic of Smale’s 18th problem) will be established through the foundations of computational mathematics. We sketch the beginning of such a program by demonstrating that there exist neural networks approximating classical mappings in scientific computing for which no algorithm (even randomised) can compute such a network to even 1-digit accuracy (with probability better than 1/2). We will also show how instability is inherently present in the methodology of DL, demonstrating that there is no easy remedy. Indeed, there are uncountably many basic classification problems for which there exists a stable neural network, yet the current DL methodology will always produce a neural network that is unstable. Finally, we will address the issue of AI-generated hallucinations, which have become a serious concern in the fastMRI challenge, and show how this phenomenon is related to instability.

Video

WEDNESDAY FEBRUARY 24 - Andrés Almansa (CNRS, Université de Paris, FR)

Title: Solving Inverse Problems by Joint Posterior Maximization with a VAE Prior

Abstract: In this talk we address the problem of solving ill-posed inverse problems in imaging where the prior is a neural generative model. Specifically, we consider the decoupled case where the prior is trained once and can be reused for many different log-concave degradation models without retraining. Whereas previous MAP-based approaches to this problem lead to highly non-convex optimization algorithms, our approach computes the joint (space-latent) MAP, which naturally leads to alternating optimization algorithms and to the use of a stochastic encoder to accelerate computations. The resulting technique (JPMAP) performs Joint Posterior Maximization using an Autoencoding Prior. We show theoretical and experimental evidence that the proposed objective function is quite close to bi-convex. Indeed, it satisfies a weak bi-convexity property which is sufficient to guarantee that our optimization scheme converges to a stationary point. We also highlight the importance of correctly training the VAE using a denoising criterion, in order to ensure that the encoder generalizes well to out-of-distribution images without affecting the quality of the generative model. This simple modification is key to providing robustness to the whole procedure. Finally, we show how our joint MAP methodology relates to more common MAP approaches, and we propose a continuation scheme that uses our JPMAP algorithm to provide more robust MAP estimates. Experimental results also show the higher quality of the solutions obtained by our JPMAP approach with respect to other non-convex MAP approaches, which more often get stuck in spurious local optima.
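
Schematically (and up to details that differ in the actual JPMAP formulation), the joint space-latent objective optimized alternately in $x$ and $z$ for a linear degradation $y = Ax + n$ with decoder $D_\theta$ has the form $$J(x,z) \;=\; \frac{1}{2\sigma^2}\|Ax-y\|^2 \;+\; \frac{1}{2\gamma^2}\|x - D_\theta(z)\|^2 \;+\; \frac{1}{2}\|z\|^2,$$ where the $x$-step is a quadratic problem and the $z$-step can be warm-started with the (stochastic) encoder.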

Video

WEDNESDAY MARCH 3 - Simon Arridge (UCL, UK)

Title: Coupled Physics Imaging with Sound and Light - Deterministic and Stochastic Approaches

Abstract: Coupled Physics Imaging (CPI) refers to methods that generate contrast through one physical "wave" and generate resolution with a different physical wave. Prototypical examples are PhotoAcoustic Tomography (PAT) and Ultrasound Modulated Optical Tomography (UMOT), which both exploit the coupling between acoustic and optical propagation. A variety of methods have been developed for tackling these and related problems using analytical and computational techniques. In this talk I give an overview of some of the methods we have developed using Compressed Sensing, Machine Learning, and a recent fully stochastic inversion algorithm.

Video

WEDNESDAY MARCH 10 - Luca Calatroni (CNRS, UCA, FR)

Title: On and beyond Total Variation regularisation in imaging: the role of space variance.

Abstract: The use of space-variant regularisation models for inverse imaging problems has become very popular in recent years, to overcome the inability of standard image regularisers to adapt to local features such as regularisation strength, sharpness and directionality. In this talk we focus on the case of standard Total Variation (TV) regularisation and discuss how an adaptive mathematical modelling accommodating local regularisation weighting, variable smoothness and anisotropy can improve upon well-known reconstruction drawbacks, as it is more tailored to describe local image structures. We further show how these models can be interpreted within the flexible Bayesian framework of Generalised Gaussian Distributions and combined with maximum likelihood and hierarchical optimisation approaches for efficient hyper-parameter estimation. Our combined modelling is then validated on some standard image restoration problems.
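
As a point of reference, the simplest space-variant instance replaces the global TV weight by a spatially varying one, $$\min_u \ \int_\Omega \lambda(x)\,|\nabla u(x)|\,\mathrm{d}x \;+\; \frac{1}{2}\int_\Omega (Au-f)^2\,\mathrm{d}x,$$ and the models discussed in the talk go further by letting the exponent and the local anisotropy of the regulariser vary with $x$ as well; the Generalised Gaussian interpretation then provides a principled way to estimate these space-variant parameters.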

This is joint work with M. Pragliola, A. Lanza and F. Sgallari (University of Bologna).


Video

WEDNESDAY MARCH 17 - Otmar Scherzer (RICAM, AT)

Title: Reconstructions in Coupled Physics Imaging

Abstract: Coupled Physics Imaging (CPI) makes use of the coupling of different physical phenomena. For instance, in Photoacoustics pulsed laser light triggers sound waves, which are used for tomographic reconstruction. In this talk we present some examples of CPI, such as Photoacoustics, and highlight some related mathematical inverse problems, in particular Quantitative Photoacoustics.

Video

WEDNESDAY MARCH 24 - Marcelo Pereyra (HWU, UK)

Title: Bayesian Imaging using Plug & Play (PnP) priors

Abstract: Plug & Play (PnP) methods have become ubiquitous in Bayesian imaging. These methods derive Minimum Mean Square Error (MMSE) or Maximum A Posteriori (MAP) estimators for imaging inverse problems by combining an explicit likelihood function with a prior that is implicitly defined by an image denoising algorithm. The PnP algorithms proposed in the literature mainly differ in the iterative schemes they use for optimisation or for sampling. In the case of optimisation schemes, some recent works guarantee convergence to a fixed point, albeit not necessarily a MAP estimate. In the case of sampling schemes, to the best of our knowledge, there is no known proof of convergence. There also remain important open questions regarding whether the underlying Bayesian models and estimators are well defined, well-posed, and have the basic regularity properties required to support these numerical schemes. To address these limitations, this talk presents theory, methods, and provably convergent algorithms for performing Bayesian inference with PnP priors. We introduce two algorithms: 1) a PnP Unadjusted Langevin Algorithm for Monte Carlo sampling and MMSE inference, and 2) a PnP Stochastic Gradient Descent algorithm for MAP inference. Using recent results on the quantitative convergence of Markov chains, we establish detailed convergence guarantees for these two algorithms under realistic assumptions on the denoising operators used, with special attention to denoisers based on deep neural networks. We also show that these algorithms approximately target a decision-theoretically optimal Bayesian model that is well-posed. The proposed algorithms are demonstrated on several canonical problems such as image deblurring, inpainting, and denoising, where they are used for point estimation as well as for uncertainty visualisation and quantification.
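
For orientation, a schematic PnP unadjusted Langevin iteration is sketched below (one common form in the literature; the precise algorithm, step sizes and assumptions analysed in the talk may differ). The denoiser supplies an approximate prior score via (D(x) - x)/eps.

```python
# Schematic PnP unadjusted Langevin sampler (illustrative; the exact algorithm
# and convergence conditions discussed in the talk may differ).
import numpy as np

def pnp_ula(y, A, At, sigma2, denoiser, eps, x0, delta, n_iter=1000, seed=0):
    rng = np.random.default_rng(seed)
    x, samples = x0.copy(), []
    for _ in range(n_iter):
        grad_lik = At(y - A(x)) / sigma2       # gradient of the Gaussian log-likelihood
        prior_drift = (denoiser(x) - x) / eps  # denoiser-based approximation of the prior score
        x = x + delta * (grad_lik + prior_drift) \
            + np.sqrt(2.0 * delta) * rng.standard_normal(x.shape)
        samples.append(x.copy())
    return np.mean(samples, axis=0)            # posterior-mean (MMSE) estimate
```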

Video

WEDNESDAY APRIL 7 - Faouzi Triki (UGA, FR)

Title: Hölder stability of quantitative photoacoustic tomography based on partial data.

Abstract: We consider the problem of reconstructing the diffusion and absorption coefficients of the diffusion equation from internal information of the solution. In practice, the internal information is obtained from the first step of inverse photoacoustic tomography, and is only partially provided near the boundary due to the high absorption of the medium and the limitations of the equipment. Our main contribution is to prove a Hölder stability result for the inverse problem in a subregion where the internal information is reliably supplied, based on the stability estimate of a Cauchy problem satisfied by the diffusion coefficient. The exponent of the Hölder stability converges to a positive constant independent of the subregion as the subregion contracts towards the boundary. Numerical experiments demonstrate that it is possible to locally reconstruct the diffusion and absorption coefficients for smooth and even discontinuous media. The presented work has been done in collaboration with Qi Xue.
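
For readers unfamiliar with the terminology, a Hölder stability estimate on a subregion $\omega$ is of the schematic form $$\|q_1-q_2\|_{L^2(\omega)} \;\le\; C\,\|d_1-d_2\|^{\theta}, \qquad \theta\in(0,1],$$ where the $q_i$ denote the unknown coefficients and the $d_i$ the corresponding internal data; the result described in the abstract shows that $\theta$ stays bounded away from zero as $\omega$ contracts towards the boundary.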

Video

WEDNESDAY APRIL 14 - Fioralba Cakoni (Rutgers University, US)

Title: On some old and new spectral problems in inverse scattering theory

Abstract: Scattering poles, non-scattering frequencies and transmission eigenvalues are intrinsic to scattering theory for inhomogeneous media. We explain how these sets appear in connection with fundamental properties of the scattering operator and how they relate to each other. In particular, we will discuss how real transmission eigenvalues can be determined from the scattering data and what they say about the material properties of the scattering media. We then present a generic way to modify the scattering data in order to obtain new spectral problems associated with the corresponding modified scattering operator. The goal is to broaden the applicability of the eigenvalue method to inverse problems for absorbing/dispersive media.

Video

WEDNESDAY APRIL 21 - Stacey Levine (DUQ, US)

Title: Model-based and Data-driven Geometry-Based Image Denoising

Abstract: Geometric measures of image data have found broad applicability both as regularizers and as key features in well-posed imaging algorithms. Still, balancing performance against theoretical guarantees has been an ongoing challenge in utilizing these approaches in practice. Data-driven approaches have been outperforming model-based approaches with respect to performance, often at the expense of theoretical guarantees, but current work has moved research in the direction of merging these two approaches to obtain the best of both worlds. In this talk we will discuss recent results in both model-based and data-driven denoising approaches that take advantage of image geometry in a novel way.

WEDNESDAY APRIL 28 - Sung-Ha Kang (GATECH, US)

Title: Vectorization, Decomposition and PDE identification

Abstract: This talk covers a few problems in imaging and inverse problems: image vectorization, image decomposition and PDE identification. The first topic, vectorization, refers to converting pixelwise bitmap information to scalable vector image files, allowing easy manipulation of the image, e.g., scale changes without losing image quality. We propose a mathematically founded silhouette vectorization algorithm, which extracts the outline of a 2D shape from a raster binary image and converts it to a combination of cubic Bézier polygons and perfect circles. Secondly, we explore a convex non-convex variational decomposition model which separates a given image into piecewise-constant, smooth homogeneous and noisy/texture components. This model shows versatility in applications, from denoising to spotlight and shadow removal. Finally, we propose methods to identify differential equations from given discrete time-dependent data. Identifying unknown equations from noisy discrete data is a challenging problem: a small amount of noise can make the recovery unstable, and nonlinearity and differential equations with varying coefficients add complexity to the problem. We propose methods based on numerical PDE techniques for stable identification of the underlying PDE.
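
To give a flavour of the third topic, here is a generic sketch of PDE identification by sparse regression over a library of candidate terms (the names and the candidate library are illustrative; the talk's methods, based on numerical PDE techniques, are more elaborate):

```python
# Generic sketch of PDE identification by sparse regression over candidate terms
# (not necessarily the speaker's method): fit u_t as a sparse combination of
# spatial derivatives computed from the discrete data u (time x space array).
import numpy as np

def identify_pde(u, dt, dx, threshold=0.05, n_sweeps=10):
    u_t = np.gradient(u, dt, axis=0).ravel()
    u_x = np.gradient(u, dx, axis=1)
    u_xx = np.gradient(u_x, dx, axis=1)
    # Candidate library: [u, u_x, u_xx, u*u_x]
    Theta = np.column_stack([u.ravel(), u_x.ravel(), u_xx.ravel(), (u * u_x).ravel()])
    coef = np.linalg.lstsq(Theta, u_t, rcond=None)[0]
    for _ in range(n_sweeps):                  # sequential thresholding enforces sparsity
        small = np.abs(coef) < threshold
        coef[small] = 0.0
        big = ~small
        if big.any():
            coef[big] = np.linalg.lstsq(Theta[:, big], u_t, rcond=None)[0]
    return coef                                # nonzero entries indicate the identified terms
```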

Video

WEDNESDAY MAY 5 - Nelly Pustelnik (CNRS, ENS, FR)

Title: Joint estimation and contour detection in large scale image processing

Abstract: In standard contour detection procedures, a first step is dedicated to estimating the descriptors and a second step aims to extract interfaces from these noisy descriptors. The objective of this presentation is to highlight the benefit of performing both estimation and contour detection in a single step.

We first illustrate our purpose on the challenging question of textured image segmentation. We will address this question through the coupling between multiresolution analysis and non-smooth optimization tools. The resulting objective function appears to be strongly convex and efficient algorithmic strategies are proposed. We evaluate the performance of the proposed method on large scale images such as those encountered in the study of the dynamics of multiphase flows in porous media.

Second, we focus on image restoration. We consider a general formulation of the discrete counterpart of the Mumford–Shah functional, allowing us to perform image restoration and contour detection jointly. We derive a new proximal alternating minimization scheme, allowing us to deal with the resulting non-convex objective function and with large scale images. The good numerical behavior of the proposed strategy is evaluated and compared to state-of-the-art approaches in image restoration.
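
One common discrete form of the Mumford–Shah functional used for such joint restoration/contour detection (written here only schematically; the formulation in the talk is more general) couples the image $u$ with an edge variable $e\in[0,1]^{|E|}$: $$\min_{u,\,e}\ \tfrac12\|Au-z\|_2^2 \;+\; \beta\,\|(1-e)\odot Du\|_2^2 \;+\; \lambda\,\|e\|_1,$$ where $D$ is a discrete gradient and the last term penalises contour length; proximal alternating minimization then handles the resulting non-convexity.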


Video

WEDNESDAY MAY 12 - Yunan Yang (NYU, US)

Title: Optimal Transport for Inverse Problems and the Implicit Regularization

Abstract: Optimal transport has been an intriguing topic of mathematical analysis since Monge (1781). The problem's close connections with differential geometry and kinetic descriptions were discovered within the past century, and the seminal work of Kantorovich (1942) showed its power to solve real-world problems. Recently, we proposed the quadratic Wasserstein distance from optimal transport theory for inverse problems, tackling the classical least-squares method's longstanding difficulties such as nonconvexity and noise sensitivity. The work was soon adopted in the oil industry. As we advance, we discover that the advantages of changing the data misfit carry over to a broader class of data-fitting problems, by examining the preconditioning and "implicit" regularization effects of different mathematical metrics used as the objective function in optimization, as the likelihood function in Bayesian inference, and as the measure of residual in numerical solutions to PDEs.
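
For reference, the quadratic Wasserstein distance between two probability densities $f$ and $g$ on $\mathbb{R}^d$ is $$W_2^2(f,g) \;=\; \inf_{T:\,T_{\#}f=g}\ \int_{\mathbb{R}^d} |x-T(x)|^2\,f(x)\,\mathrm{d}x,$$ and the idea sketched in the abstract is to replace the least-squares misfit $\tfrac12\|f(m)-g\|_2^2$ between simulated and observed data by $W_2^2\big(f(m),g\big)$, after normalising both to nonnegative unit-mass densities.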

Video

WEDNESDAY MAY 19 - NO SEMINAR


WEDNESDAY MAY 26 - Sanghyeon Yu (Korea University, KR)

Title: Hybridization of singular plasmons via transformation optics

Abstract: Surface plasmon resonances of metallic nanostructures offer great opportunities to guide and manipulate light on the nanoscale. A central inverse problem in plasmonics is to find the geometry of the nanostructure that would yield the desired resonance spectrum. Despite many advances, the design becomes quite challenging when the desired spectrum is highly complex. In this talk, I will discuss a new theoretical model for surface plasmons which reduces the complexity of the design process significantly. Our model is developed by combining plasmon hybridization theory with transformation optics, giving an efficient way of simultaneously controlling both global and local features of the resonance spectrum. As an application, we propose a design of a metasurface whose absorption spectrum can be controlled over a large class of complex patterns through only a few geometric parameters in an intuitive way. Our approach provides fundamental tools for the effective design of plasmonic metamaterials with on-demand functionality. This talk is based on joint work with Habib Ammari (ETHZ).

Video

WEDNESDAY JUNE 2 - Peyman Milanfar (Google Research)

Title: Denoising as a Building Block: Theory and Applications

Abstract: Denoising of images has reached impressive levels of quality -- almost as good as we can ever hope. There are thousands of papers on this topic, and their scope is so vast and their approaches so diverse that putting them in some order is both useful and challenging. I will speak about why we should still care deeply about this topic, what we can say about this general class of operators on images, and what makes them so special. Of particular interest is how we can use denoisers as building blocks for broader image processing tasks, including as regularizers for general inverse problems.

Video - Slides

WEDNESDAY JUNE 9 - Laurent Seppecher (MI/ICJ, FR)

Title: Stability and discretization techniques for some elliptic inverse parameter problems from internal data in elastography - application to breast tumor detection

Abstract: In this talk, we discuss questions around models and stability for recovering the shear modulus of biological tissues from internal displacement data. While theoretical stability is not guaranteed in general, as the operator that we aim to invert may not have closed range, it is possible through a Galerkin approach to construct finite-dimensional operators that can be inverted with stability in the L^2 norm. This discretization is built from the so-called Reverse Weak Formulation of the linear elasticity equation. Using a well-chosen pair of finite element spaces which satisfy a generalized discrete inf-sup (or LBB) condition, we provide quantitative error estimates for the inverse problem. The resulting method is efficient, as it does not require any iterative resolution of the forward problem, and general, as it requires neither smoothness hypotheses on the data nor additional information at the boundary. We illustrate the proposed method with numerical examples and experimental data, including in vivo experiments from elasto-static stimulations of breast tumors.

Video

WEDNESDAY JUNE 16 - Yiqiu Dong (DTU, DK)

Title: Model Error Matters – CT reconstruction with uncertain view angles

Abstract: Inverse problems are mathematical problems that arise when one wants to recover “hidden” information from indirect and incomplete measurements/data. An intrinsic difficulty in inverse problems is their inherent ill-posedness or instability, which is reflected in the fact that solutions are very sensitive to noise and errors. In most applications, the mathematical models in inverse problems are approximations or simplifications of the actual physics, which introduces model errors. Model errors have received much less attention than measurement noise. In this talk, we consider computed tomography (CT) with uncertain measurement geometry, focusing on the case where the view angles are uncertain, and use this case as an example to introduce new methods for handling model errors in inverse problems. A key component of our methods is that we quantify the uncertainty of the view angles via a model-discrepancy formulation, allowing us to take the uncertainty into account in the image reconstruction. Numerical experiments show that our methods are able to improve the relative reconstruction error and visual quality. Our method with view angle estimation is even able to achieve reconstructions whose quality is similar to those obtained with the correct view angles (the ideal scenario).

WEDNESDAY JUNE 23 - Paul Hand (Northeastern University, US)

Title: Signal Recovery with Generative Priors

Abstract: Recovering images from very few measurements is an important task in imaging problems. Doing so requires assuming a model of what makes some images natural. Such a model is called an image prior. Classical priors such as sparsity have led to the speedup of Magnetic Resonance Imaging in certain cases. With the recent developments in machine learning, neural networks have been shown to provide efficient and effective priors for inverse problems arising in imaging. In this talk, we will discuss the use of neural network generative models for inverse problems in imaging. We will present a rigorous recovery guarantee at optimal sample complexity for compressed sensing and other inverse problems under a suitable random model. We will see that generative models enable an efficient algorithm for phase retrieval from generic measurements with optimal sample complexity. In contrast, no efficient algorithm is known for this problem in the case of sparsity priors. We will discuss strengths, weaknesses, and future opportunities of neural networks and generative models as image priors. These works are in collaboration with Vladislav Voroninski, Reinhard Heckel, Ali Ahmed, Wen Huang, Oscar Leong, Jorio Cocola, Muhammad Asim, and Max Daniels.
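
As a minimal illustration of recovery with a generative prior (a sketch only, with hypothetical names; not the speaker's specific algorithms or guarantees), one can search the latent space of a trained generator G so that the simulated measurements match the observed ones:

```python
# Minimal sketch of compressed sensing with a generative prior: minimize the
# measurement residual over the latent variable of a trained generator G.
import torch

def recover_with_generative_prior(y, A, G, latent_dim=100, steps=500, lr=1e-2):
    z = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        residual = A @ G(z).view(-1) - y      # A: measurement matrix, y: measurements
        loss = 0.5 * torch.sum(residual ** 2)
        loss.backward()
        opt.step()
    return G(z).detach()                      # estimate constrained to the range of G
```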

WEDNESDAY JUNE 30 - Leon Bungert (HCM, DE)

Title: A Bregman Learning Framework for Sparse Neural Networks

Abstract: I will present a novel learning framework based on stochastic Bregman iterations. It makes it possible to train sparse neural networks with an inverse scale space approach, starting from a very sparse network and gradually adding significant parameters. Apart from a baseline algorithm called LinBreg, I will also speak about an accelerated version using momentum, and AdaBreg, which is a Bregmanized generalization of the Adam algorithm. I will present a statistically profound sparse parameter initialization strategy, a stochastic convergence analysis of the loss decay, and additional convergence proofs in the convex regime. The Bregman learning framework can also be applied to Neural Architecture Search and can, for instance, unveil autoencoder architectures for denoising or deblurring tasks.
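
The core mechanism can be sketched as follows (a simplification; LinBreg/AdaBreg as defined in the paper include further ingredients such as the initialization strategy and momentum/Adam variants): a dual variable accumulates stochastic gradients, and the parameters are obtained from it by soft-thresholding, so most weights remain exactly zero until their dual variable has gathered enough evidence.

```python
# Simplified linearized-Bregman-style update for sparse training (illustrative;
# see the paper for the exact LinBreg/AdaBreg algorithms and their analysis).
import torch

@torch.no_grad()
def bregman_step(params, duals, tau=0.1, lam=1e-3, delta=1.0):
    for theta, v in zip(params, duals):
        v -= tau * theta.grad                 # accumulate the stochastic gradient in the dual
        theta.copy_(delta * torch.sign(v) * torch.clamp(v.abs() - lam, min=0.0))  # soft-threshold
```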

Organisers

If you need to contact us, please write an e-mail to imagineoneworldseminars[at]gmail[dot]com