The propagation of coherent light through a thick layer of scattering material is an extremely complex physical process. However, it remains linear, and under certain conditions, if the incoming beam is spatially modulated to encode some data, the output as measured on a sensor can be modeled as a random projection of the input, i.e., its multiplication by an i.i.d. random matrix. One can leverage this principle for compressive imaging, and more generally for any data-processing pipeline involving large-scale random projections. This talk will present a series of proof-of-concept experiments in machine learning, and discuss recent technological developments of optical co-processors within the startup LightOn.
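As a rough numerical illustration of this model (the dimensions and the distance-preservation check below are illustrative assumptions, not LightOn's actual hardware pipeline), one can simulate the scattering medium as an i.i.d. random matrix:

```python
import numpy as np

# Minimal sketch: model the scattering medium as multiplication by an
# i.i.d. random matrix A, as described in the abstract. Sizes and the
# distance-preservation check are illustrative assumptions.
rng = np.random.default_rng(0)

n, m = 4096, 512                               # input (modulator) size, sensor size
A = rng.standard_normal((m, n)) / np.sqrt(m)   # i.i.d. random projection

x1 = rng.standard_normal(n)                    # two spatially encoded inputs
x2 = rng.standard_normal(n)

y1, y2 = A @ x1, A @ x2                        # simulated sensor outputs

# Random projections approximately preserve pairwise distances, which is
# what compressive imaging and random-feature pipelines rely on.
print(np.linalg.norm(x1 - x2), np.linalg.norm(y1 - y2))
```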
Joint work with Mohammad Golbabaee.
We consider the convergence of the iterative projected gradient (IPG) algorithm for arbitrary (typically nonconvex) sets and when both the gradient and projection oracles are only computed approximately. We consider different notions of approximation and show that the Progressive Fixed Precision (PFP) and the (1+epsilon)-optimal oracles can achieve the same accuracy as the exact IPG algorithm. We further show that the former scheme maintains the (linear) rate of convergence of the exact algorithm under the same embedding assumption, while the latter requires a stronger embedding condition, tolerates only moderate compression ratios, and typically exhibits slower convergence. We apply our results to accelerate solving a class of data-driven compressed sensing problems, where we replace iterative exhaustive searches over large datasets by fast approximate nearest neighbour search strategies based on the cover tree data structure. Finally, time permitting, we will give examples of this theory applied in practice to rapid, enhanced solutions of an emerging MRI protocol called magnetic resonance fingerprinting for quantitative MRI.
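For concreteness, here is a minimal sketch of IPG with a data-driven projection oracle: the signal is constrained to a finite dataset, and the projection is a nearest-neighbour search over that dataset. The brute-force search below stands in for the cover-tree based approximate oracle discussed in the talk; all sizes are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of iterative projected gradient (IPG) for a data-driven
# model: the signal lives in a finite dataset D (rows), and the projection
# oracle is a nearest-neighbour search over D (brute force here; the talk
# replaces it with an approximate cover-tree search).
rng = np.random.default_rng(1)

n, m, N = 64, 48, 200
D = rng.standard_normal((N, n))                # dataset of candidate signals
x_true = D[7]                                  # ground truth lies in D
A = rng.standard_normal((m, n)) / np.sqrt(m)   # compressive measurement matrix
y = A @ x_true

def project_onto_dataset(z, D):
    """(Exact) projection oracle: nearest point of D to z."""
    idx = np.argmin(np.linalg.norm(D - z, axis=1))
    return D[idx]

x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2         # gradient step size
for _ in range(50):
    x = project_onto_dataset(x - step * A.T @ (A @ x - y), D)

print(np.linalg.norm(x - x_true))              # typically (near) zero
```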
We extend a recent simple classification approach for binary data in order to efficiently classify hierarchical data. In certain settings, specifically when some classes are significantly easier to identify than others, we demonstrate computational and accuracy advantages.
Many problems in computational science require the approximation of a high-dimensional function from limited amounts of data. For instance, a common task in Uncertainty Quantification (UQ) involves building a surrogate model for a parametrized computational model. Complex physical systems involve computational models with many parameters, resulting in multivariate functions of many variables. Although the amount of data may be large, the curse of dimensionality essentially prohibits collecting or processing enough data to reconstruct such a function using classical approximation techniques. Over the last five years, spurred by its successful application in signal and image processing, compressed sensing has begun to emerge as a potential tool for surrogate model construction in UQ. In this talk, I will give an overview of the application of compressed sensing to high-dimensional approximation. I will demonstrate how the appropriate implementation of compressed sensing overcomes the curse of dimensionality (up to a log factor). This is based on weighted l1 regularizers and structured sparsity in so-called lower sets. If time permits, I will also discuss several variations and extensions relevant to UQ applications, many of which have links to the standard compressed sensing theory. These include dealing with corrupted data, the effect of model error, functions defined on irregular domains, and incorporating additional information such as gradient data. I will also highlight several challenges and open problems.
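As a minimal sketch of the weighted l1 idea (the weights, problem sizes, and the plain ISTA solver below are illustrative assumptions, not the scheme analysed in the talk, where the weights are tied to the polynomial basis and lower sets):

```python
import numpy as np

# Minimal sketch: weighted l1 regularization solved by ISTA, i.e.
# minimize 0.5*||A x - y||^2 + lam * sum_i w_i |x_i|.
rng = np.random.default_rng(2)

m, n, s = 60, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true

w = 1.0 + 0.1 * np.arange(n)                # increasing weights (assumption)
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2

x = np.zeros(n)
for _ in range(2000):
    z = x - step * A.T @ (A @ x - y)        # gradient step on the data fit
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam * w, 0.0)  # weighted soft-threshold

print(np.linalg.norm(x - x_true))           # reconstruction error
```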
In the traditional compressed sensing literature, it is implicitly assumed that one has direct access to noisy analog linear measurements of an (unknown) signal. In reality, these analog measurements need to be quantized to a finite number of bits before they can be transmitted, stored, and processed. The emerging theory of quantized compressed sensing studies how to jointly design a quantizer, a measurement procedure, and a reconstruction algorithm in order to accurately recover low-complexity signals.
In the popular one-bit compressed sensing model, each linear analog measurement is quantized to a single bit in a memoryless fashion. This quantization operation can be implemented with energy-efficient hardware. There is by now a rich theory available for one-bit compressed sensing with standard Gaussian measurements. Outside of this purely Gaussian setting, very little is known about one-bit compressed sensing. In fact, recovery can in general easily fail for non-Gaussian measurement matrices, even if they are known to perform optimally in "unquantized" compressed sensing.
In my talk, I will show that this picture completely changes if we use dithering, i.e., deliberately add noise to the measurements before quantizing them. By using well-designed dithering, it becomes possible to accurately reconstruct low-complexity signals from a small number of one-bit quantized measurements, even if the measurement vectors are drawn from a heavy-tailed distribution. The reconstruction results that I will present are very robust to noise on the analog measurements as well as to adversarial bit corruptions occurring in the quantization process. If the measurement matrix is subgaussian, then accurate recovery can be achieved via a convex program. The proofs of these reconstruction theorems are based on novel random hyperplane tessellation results.
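A minimal numerical sketch of the dithered one-bit model follows, using a simple projected back-projection estimator rather than the convex program from the talk; the dither level, sparsity level, and Gaussian measurement matrix are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: dithered one-bit measurements y = sign(Ax + tau) with a
# uniform dither tau, followed by back-projection and hard thresholding.
rng = np.random.default_rng(3)

m, n, s = 5000, 100, 5
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.choice([-1.0, 1.0], s)
x_true /= np.linalg.norm(x_true)

lam = 4.0                                   # dither amplitude (assumption)
tau = rng.uniform(-lam, lam, size=m)        # uniform dither
y = np.sign(A @ x_true + tau)               # one-bit quantized measurements

# E_tau[lam * sign(t + tau)] = t for |t| <= lam, so lam*y is an unbiased
# surrogate for the analog measurements; back-project and keep the s
# largest entries.
z = (lam / m) * A.T @ y
x_hat = np.zeros(n)
top = np.argsort(np.abs(z))[-s:]
x_hat[top] = z[top]
x_hat /= np.linalg.norm(x_hat)

print(np.linalg.norm(x_hat - x_true))       # typically small
```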
Based on joint work with Shahar Mendelson (Technion, Haifa / ANU, Canberra).
Can modern signal processing be used to overcome the diffraction limit? The classical diffraction limit states that the resolution of a linear imaging system is fundamentally limited by one half of the wavelength of light. This implies that conventional light microscopes cannot distinguish two objects placed within a distance closer than 0.5 × 400 = 200nm (blue) or 0.5 × 700 = 350nm (red). This significantly impedes biomedical discovery by restricting our ability to observe biological structures and processes smaller than 100nm. Recent progress in sparsity-driven signal processing has created a powerful paradigm for increasing both the resolution and overall quality of imaging by promoting model-based image acquisition and reconstruction. This has led to multiple influential results demonstrating super-resolution in practical imaging systems. To date, however, the vast majority of work in signal processing has neglected the fundamental nonlinearity of the object-light interaction and its potential to lead to resolution enhancement. As a result, modern theory focuses heavily on linear measurement models that are truly effective only when object-light interactions are weak. Without a solid signal processing foundation for understanding such nonlinear interactions, we undervalue their impact on information transfer in image formation. This ultimately limits our capability to image a large class of objects, such as biological tissue, that are generally large in volume and interact strongly and nonlinearly with light.
The goal of this talk is to present recent progress in model-based imaging under multiple scattering. We will discuss several key applications including optical diffraction tomography, Fourier ptychography, and large-scale holographic microscopy. We will show that all these applications can benefit from models, such as the Rytov approximation and the beam propagation method, that take light scattering into account. We will discuss the integration of such models into state-of-the-art optimization algorithms such as FISTA and ADMM. Finally, we will describe the most recent work that uses learned priors for improving the quality of image reconstruction under multiple scattering.
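As a minimal sketch of the FISTA building block, the snippet below uses a generic linear forward model H standing in for the scattering-aware (Rytov / beam-propagation) models discussed in the talk; the operator, sizes, and l1 prior are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of FISTA for minimize 0.5*||H x - y||^2 + lam*||x||_1,
# with a placeholder linear forward model H.
rng = np.random.default_rng(4)

m, n = 128, 256
H = rng.standard_normal((m, n)) / np.sqrt(m)     # placeholder forward model
x_true = np.zeros(n)
x_true[rng.choice(n, 10, replace=False)] = 1.0
y = H @ x_true + 0.01 * rng.standard_normal(m)

lam = 0.01
L = np.linalg.norm(H, 2) ** 2                    # Lipschitz constant of the gradient

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

x, v, t = np.zeros(n), np.zeros(n), 1.0
for _ in range(300):
    x_new = soft(v - (H.T @ (H @ v - y)) / L, lam / L)   # proximal gradient step
    t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    v = x_new + ((t - 1.0) / t_new) * (x_new - x)        # Nesterov momentum
    x, t = x_new, t_new

print(np.linalg.norm(x - x_true))
```

In the applications above, the linear operator H and its adjoint would be replaced by the (linearized) scattering forward model and its Jacobian, possibly combined with a learned prior in place of the l1 penalty.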