-- Last update: 17/01/2022
Abstract: Non-convex methods for linear inverse imaging problems with low-dimensional models have emerged as an alternative to convex techniques. We propose a theoretical framework in which both finite-dimensional and infinite-dimensional linear inverse problems can be studied. This framework recovers existing results about low-rank matrix factorization and off-the-grid sparse spike estimation, and it provides new results for Gaussian mixture estimation from linear measurements. (A toy low-rank instance is sketched after this abstract.)
(Joint work with Jean-François Aujol and Arthur Leclaire)
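To fix ideas, here is a minimal sketch of one finite-dimensional instance the abstract mentions (our illustration, not the talk's framework): non-convex gradient descent on a factorized parameterization X = U Uᵀ to recover a low-rank matrix from random Gaussian linear measurements. All sizes, the step size, and the initialization below are arbitrary choices for the sketch.

```python
# Illustrative sketch only: factorized (Burer-Monteiro-style) gradient descent
# for low-rank matrix recovery from random linear measurements.
import numpy as np

rng = np.random.default_rng(0)
n, r, p = 30, 2, 600                         # ambient size, rank, number of measurements

U_true = rng.standard_normal((n, r)) / np.sqrt(n)
X_true = U_true @ U_true.T                   # ground-truth low-rank PSD matrix
A = rng.standard_normal((p, n * n)) / np.sqrt(p)   # random measurement operator
y = A @ X_true.ravel()                       # noiseless measurements y = A(X_true)

U = 0.1 * rng.standard_normal((n, r))        # random init (a spectral init is better)
for _ in range(2000):
    res = A @ (U @ U.T).ravel() - y          # data-fit residual
    G = (A.T @ res).reshape(n, n)            # gradient with respect to X
    U -= 0.1 * (G + G.T) @ U                 # chain rule through X = U U^T
print("relative error:", np.linalg.norm(U @ U.T - X_true) / np.linalg.norm(X_true))
```

The point of the non-convex parameterization is that the iterate never leaves the low-dimensional model set; the talk's framework studies when such descent provably succeeds, including in infinite-dimensional settings such as off-the-grid spikes and Gaussian mixtures.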
Abstract: High-resolution FMCW radar systems are becoming an integral aspect of applications ranging from automotive safety and autonomous driving to health monitoring of infants and the elderly. This integration provides challenging scenarios that require radars with extremely high dynamic range (HDR) ADCs; these ADCs need to avoid saturation while offering high-performance, high-fidelity data acquisition. The recent concept of Unlimited Sensing allows one to achieve HDR acquisition by recording low dynamic range, modulo samples. Interestingly, oversampling of these folded measurements, with a sampling rate independent of the modulo threshold, is sufficient to guarantee perfect reconstruction of band-limited signals. This contrasts with the traditional methodology of increasing the dynamic range by adding a programmable-gain amplifier or operating multiple ADCs in parallel. This paper demonstrates an FMCW radar prototype that utilises the unlimited sampling strategy. Our hardware experiments show that even with modulo measurements of lower precision, the US reconstruction is able to match the performance of conventional acquisition. Furthermore, our real-time processing capability demonstrates that our “proof-of-concept” approach is a viable solution for HDR FMCW radar signal processing, thus opening a pathway for future hardware-software optimization and integration of this technology with other mainstream systems. (A minimal folding-and-unwrapping sketch follows this abstract.)
This is joint work with Ayush Bhandari (Imperial College London, UK)
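To make the modulo-sampling idea concrete, here is a minimal sketch (ours, not the paper's radar pipeline): fold a signal into [-λ, λ) and recover it by unwrapping first-order differences. This simple unwrapping needs consecutive samples to differ by less than λ; the actual Unlimited Sensing guarantees use higher-order differences and hold at a sampling rate independent of λ.

```python
# Minimal modulo-sampling sketch: fold, then unwrap first differences.
import numpy as np

def modulo(z, lam):
    return np.mod(z + lam, 2.0 * lam) - lam          # centered fold into [-lam, lam)

t = np.linspace(0.0, 1.0, 4000)                      # heavy oversampling
x = 3.0 * np.sin(2 * np.pi * 3 * t) + 2.0 * np.sin(2 * np.pi * 5 * t)
lam = 0.5                                            # ADC range is only [-0.5, 0.5)
y = modulo(x, lam)                                   # low-dynamic-range modulo samples

# Since |x[k+1] - x[k]| < lam here, folding the differences of y recovers the
# true differences exactly; a cumulative sum then rebuilds the signal. Note
# x(0) lies inside [-lam, lam), so y[0] = x[0] anchors the sum; in general the
# signal is only recovered up to an unknown multiple of 2*lam.
dx = modulo(np.diff(y), lam)
x_rec = np.concatenate(([y[0]], y[0] + np.cumsum(dx)))
print("max reconstruction error:", np.max(np.abs(x_rec - x)))
```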
Abstract: Expressing a matrix as the sum of a low-rank matrix plus a sparse matrix is a flexible model capturing global and local features in data. This model is the foundation of robust principal component analysis, and was popularized by dynamic-foreground/static-background separation, amongst other applications. In this talk we develop guarantees showing that rank-r plus sparsity-s matrices can be recovered by computationally tractable methods from p = O((r(m+n-r)+s)log(mn/s)) linear measurements. We establish that the restricted isometry constants for the low-rank plus sparse matrix set remain bounded independent of the problem size provided p/mn, s/p, and r(m+n-r)/p remain fixed. The developed theory and algorithms also apply to the fully observed case of Robust PCA.
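As a rough illustration of the recovery problem (our sketch, not the talk's algorithms): alternate projected-gradient steps that re-impose rank r on one component and sparsity s on the other. Problem sizes and the unit step size are arbitrary, and this simple heuristic carries none of the talk's guarantees.

```python
# Toy sketch: recover X = L + S (rank-r plus s-sparse) from Gaussian measurements.
import numpy as np

rng = np.random.default_rng(1)
m, n, r, s = 40, 40, 2, 20
p = 4 * (r * (m + n - r) + s)                 # a few times the model dimension

L0 = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
S0 = np.zeros((m, n))
S0.ravel()[rng.choice(m * n, s, replace=False)] = 5.0 * rng.standard_normal(s)
A = rng.standard_normal((p, m * n)) / np.sqrt(p)
y = A @ (L0 + S0).ravel()                     # linear measurements of L0 + S0

def proj_rank(Z, r):                          # best rank-r approximation via SVD
    U, sig, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U[:, :r] * sig[:r]) @ Vt[:r]

def proj_sparse(Z, s):                        # keep the s largest-magnitude entries
    W = np.zeros_like(Z)
    idx = np.argsort(np.abs(Z).ravel())[-s:]
    W.ravel()[idx] = Z.ravel()[idx]
    return W

def grad(L, S):                               # gradient of 0.5*||A(L+S) - y||^2
    return (A.T @ (A @ (L + S).ravel() - y)).reshape(m, n)

L, S = np.zeros((m, n)), np.zeros((m, n))
for _ in range(300):
    L = proj_rank(L - grad(L, S), r)
    S = proj_sparse(S - grad(L, S), s)
print("relative error:", np.linalg.norm(L + S - L0 - S0) / np.linalg.norm(L0 + S0))
```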
Abstract: In the past five years, deep learning methods have become state-of-the-art in solving various inverse problems. Before such approaches can find application in safety-critical fields, a verification of their reliability appears mandatory. Recent works have pointed out instabilities of deep neural networks for several image reconstruction tasks. In analogy to adversarial attacks in classification, it was shown that slight distortions in the input domain may cause severe artifacts. In this talk, we will shed new light on this concern through an extensive empirical study of the robustness of deep-learning-based algorithms for solving underdetermined inverse problems. This covers compressed sensing with Gaussian measurements as well as image recovery from Fourier and Radon measurements, including a real-world scenario for magnetic resonance imaging (using the NYU-fastMRI dataset). Our main focus is on computing adversarial perturbations of the measurements that maximize the reconstruction error. In contrast to previous findings, our results reveal that standard end-to-end network architectures are not only surprisingly resilient against statistical noise, but also against adversarial perturbations. Remarkably, all considered networks are trained by common deep learning techniques, without sophisticated defense strategies. If time permits, we will also relate our results to the aspect of accuracy, which is discussed in the context of the recent AAPM Sparse-View CT Challenge.
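The kind of attack the abstract describes can be sketched as projected gradient ascent on the measurements. Below is our hedged illustration, with an untrained linear layer standing in for a trained reconstruction network; the talk's study uses trained end-to-end architectures and real measurement operators.

```python
# Sketch: perturb measurements y = A x within an l2 ball to maximize the
# reconstruction error of a network `net` (PGD-style, with Adam).
import torch

def attack(net, A, x, eps, steps=200, lr=0.01):
    y = A @ x                                        # clean measurements
    e = torch.zeros_like(y, requires_grad=True)      # adversarial perturbation
    opt = torch.optim.Adam([e], lr=lr)
    for _ in range(steps):
        loss = -((net(y + e) - x) ** 2).sum()        # maximize reconstruction error
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():                        # project back onto the eps-ball
            nrm = e.norm()
            if nrm > eps:
                e.mul_(eps / nrm)
    return e.detach()

torch.manual_seed(0)
p, n = 64, 128
A = torch.randn(p, n) / p ** 0.5
net = torch.nn.Linear(p, n)          # placeholder; the talk evaluates trained networks
x = torch.randn(n)
e = attack(net, A, x, eps=0.5)
print("error ratio:", ((net(A @ x + e) - x).norm() / (net(A @ x) - x).norm()).item())
```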
Abstract: In this work, we present several variants of the deep neural network named DeepPDNet, built from primal-dual proximal iterations in the context of image restoration. We reformulate specific instances of the Condat-Vũ primal-dual hybrid gradient (PDHG) algorithm as a deep network with fixed layers. Each layer corresponds to one iteration of the primal-dual algorithm. The learned parameters are both the PDHG algorithm step-sizes and the analysis linear operator involved in the penalization (including the regularization parameter). These parameters are allowed to vary from one layer to the next. First, a focus on DeepPDNet with gradient step activation will be provided. Two different learning strategies, “Full learning” and “Partial learning”, are proposed: the first is the most efficient numerically, while the second relies on standard constraints ensuring convergence of the standard PDHG iterations. Moreover, global and local sparse analysis priors are studied to seek a better feature representation. We apply the proposed methods to image restoration on the MNIST and BSD68 datasets. Second, an alternative deep network built from only the proximal activations of PDHG, related to Chambolle-Pock iterations, is designed and compared to the previous scheme.
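To fix ideas, here is our minimal sketch of one unrolled layer in the spirit of DeepPDNet: a Condat-Vũ/PDHG iteration for 0.5||Ax - b||^2 + ||Lx||_1 in which the step sizes and the analysis operator L are learnable (the regularization weight is absorbed into the scale of L). This illustrates the unrolling principle only; it is not the authors' exact architecture.

```python
# One unrolled primal-dual (Condat-Vu / PDHG) layer with learned parameters.
import torch

class PDHGLayer(torch.nn.Module):
    def __init__(self, n_in, n_feat):
        super().__init__()
        self.L = torch.nn.Parameter(0.01 * torch.randn(n_feat, n_in))  # analysis operator
        self.log_tau = torch.nn.Parameter(torch.tensor(-2.0))          # primal step (kept > 0)
        self.log_sigma = torch.nn.Parameter(torch.tensor(-2.0))        # dual step (kept > 0)

    def forward(self, x, u, A, b):
        tau, sigma = self.log_tau.exp(), self.log_sigma.exp()
        x_new = x - tau * (A.t() @ (A @ x - b) + self.L.t() @ u)  # gradient step + dual coupling
        # clamp = prox of the conjugate of ||.||_1, i.e. projection onto [-1, 1]
        u_new = torch.clamp(u + sigma * (self.L @ (2 * x_new - x)), -1.0, 1.0)
        return x_new, u_new

# Unrolled network: a fixed number of layers, each with its own parameters
# ("full learning" would train them all; "partial learning" would constrain them).
p, n, f = 50, 100, 120
A, b = torch.randn(p, n) / p ** 0.5, torch.randn(p)
layers = torch.nn.ModuleList(PDHGLayer(n, f) for _ in range(10))
x, u = torch.zeros(n), torch.zeros(f)
for layer in layers:
    x, u = layer(x, u, A, b)
```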
Abstract: Given a set of data points belonging to the convex hull of a set of vertices, a key problem in data analysis and machine learning is to estimate these vertices in the presence of noise.
Many algorithms have been developed under the assumption that there is at least one data point near each vertex; two of the most widely used ones are vertex component analysis (VCA) and the successive projection algorithm (SPA).
This assumption is known as the pure-pixel assumption in blind hyperspectral unmixing, and as the separability assumption in nonnegative matrix factorization.
More recently, Bhattacharyya and Kannan (ACM-SIAM Symposium on Discrete Algorithms, 2020) proposed an algorithm for learning a latent simplex (ALLS) that relies on the assumption that there is more than one nearby data point for each vertex.
In that scenario, ALLS is probabilistically more robust to noise than algorithms based on the separability assumption.
In this paper, inspired by ALLS, we propose smoothed VCA (SVCA) and smoothed SPA (SSPA), which generalize VCA and SPA by assuming the presence of several data points near each vertex.
We illustrate the effectiveness of SVCA and SSPA over VCA, SPA and ALLS on synthetic data sets, and on the unmixing of hyperspectral images. (A toy SPA/SSPA sketch follows this abstract.)
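For concreteness, a small sketch of SPA together with a naive smoothed selection in the spirit of SSPA (our simplification: average the p data points closest to the selected extreme point; the paper's SSPA is more careful about how the nearby points are chosen). With p = 1 this reduces to plain SPA.

```python
# SPA with an optional naive smoothing step (p > 1 averages nearby points).
import numpy as np

def spa(X, r, p=1):
    Q = np.zeros((X.shape[0], 0))                  # orthonormal basis of found directions
    vertices = []
    for _ in range(r):
        R = X - Q @ (Q.T @ X)                      # residuals of all data points
        j = np.argmax(np.linalg.norm(R, axis=0))   # most extreme remaining point
        d = np.linalg.norm(X - X[:, [j]], axis=0)
        v = X[:, np.argsort(d)[:p]].mean(axis=1)   # average the p nearest points
        vertices.append(v)
        q = v - Q @ (Q.T @ v)                      # orthogonalize and extend the basis
        Q = np.hstack([Q, (q / np.linalg.norm(q))[:, None]])
    return np.array(vertices).T

# Toy check: 3 vertices in R^5; Dirichlet weights concentrated near the vertices.
rng = np.random.default_rng(2)
W = rng.random((5, 3))
H = rng.dirichlet(0.2 * np.ones(3), 500).T
X = W @ H + 0.01 * rng.standard_normal((5, 500))
print(spa(X, 3, p=10))                             # columns approximate those of W
```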