Compressive Sensing: The Big Picture

A living document trying to paint the Big Picture in the Compressed Sensing or Compressive Sensing Framework.

[Wired/Slashdot readers, you may want to read this dedicated blog entry first!]

Table of Contents:

0. Why this Living Document?

Compressed Sensing or Compressive Sensing is about acquiring and recovering a sparse signal in the most efficient way possible (subsampling) with the help of an incoherent projecting basis. Unlike traditional sampling methods, Compressed Sensing provides a new framework for acquiring sparse signals in a multiplexed manner. The main theoretical findings in this recent field have mostly centered on how many multiplexed measurements are necessary to reconstruct the original signal and on the attendant nonlinear reconstruction techniques needed to demultiplex these signals. Another equally important thrust in the field has been the actual building of sensing hardware that can directly produce the multiplexed signals.

Information on this technique is growing fast and only a few specialists understand how each of these pieces fits within the big picture. The Nuit Blanche blog (RSS feed is here) provides daily updates on new papers/preprints and ideas coming into the field, while this document is slated to be a little less active because the big picture should not change every day. While much emphasis is currently, and rightfully, put on performing faster reconstruction of the original signal from the compressed measurements, we are also beginning to see the emergence of other tools that complete this framework.

These tools include

  • the ability to search for bases or dictionaries in which sets of signals can be decomposed in a sparse manner and, 
  • the ability to find and quantify specific measurement tools that are incoherent with said dictionaries.
In other words, once a signal is known to be sparse in a specific basis, one of the main challenges is to find a set of measurement tools (producing the compressed measurements) and the attendant nonlinear solver that reconstructs the original full signal. There are theoretical results yielding the minimum number of measurements required to recover the original signal for a specific pair of measurement matrix and nonlinear solver. In all cases, the number of compressed measurements is expected to be low relative to traditional Nyquist sampling constraints. This living document aims at categorizing all these tools so that they can be rapidly adopted by the community.
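To make the above concrete, here is a minimal, self-contained sketch of the whole chain: multiplexed acquisition with a random incoherent matrix, followed by nonlinear recovery via basis pursuit. This is an illustration rather than a reference implementation; the parameters, names, and the use of SciPy's `linprog` are my own choices, not taken from any of the toolboxes discussed here.

```python
# A minimal sketch of the compressive sensing pipeline (illustrative only).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 64, 32, 4                  # signal length, measurements, nonzeros

# A k-sparse signal and its m << n multiplexed (compressed) measurements
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # incoherent random projections
y = A @ x0

# Nonlinear recovery by basis pursuit: min ||x||_1 s.t. Ax = y,
# recast as a linear program via the split x = u - v with u, v >= 0
res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]

rel_err = np.linalg.norm(x_hat - x0) / np.linalg.norm(x0)
print(rel_err)   # recovery up to solver tolerance despite m = n/2 samples
```

Note that the solver never needed to know the support of the nonzeros, only that the signal was sparse; this is the demultiplexing step the theory is about.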

The next paragraphs try to bring the most up-to-date list of the different theoretical and applied endeavors undertaken in this framework. The fourth paragraph tries to give an exhaustive list of hardware implementations using Compressed Sensing or Compressive Sampling. The fifth paragraph lists a search engine on the subject and the sixth provides a calendar of activities in the field. If you feel it is not accurate or is missing elements, please bring them to my attention. Thanks! This document tries to summarize most of the information found in:

1. Understanding Compressed Sensing

First and foremost, you may want to check this page entitled teaching compressive sensing. If you are new to the field and want to own a program that subsamples a signal and reconstructs it exactly, then

I have also featured a series of riddles and (magic) tricks that can make Compressive Sensing easier to understand on the CS Riddles page. For more comprehensive examples, you may want to delve directly into Sparco and its attendant set of examples.
Once you are convinced it works and want to delve into it, try watching these excellent lecture videos:

In the same vein, there is a nice tutorial presentation on Compressed Sensing by Richard Baraniuk, Justin Romberg, and Michael Wakin, as well as a more in-depth tutorial entitled Compressive sensing - Theory and applications by Petros Boufounos, Justin Romberg and Richard Baraniuk. [There is now a video of Richard Baraniuk giving his latest introduction to Compressed Sensing at Microsoft Research on August 4, 2008. Please be aware that the address does not seem to work with Firefox, Chrome or Opera (I tried); it only works with Internet Explorer. The original link that can be seen from any browser is here.] Other presentations that might provide insights include:

Additionally there are two courses that might be complementary to these presentations: 

Now onto the important stuff

2. Dictionaries for Sparse Recovery (How to Make or Find them)

When a signal is said to be sparse in an engineering sense, it really means that the signal is compressible, i.e. it can be expanded either in a small number of terms or in a series with significantly decaying coefficients. In order to produce compressed measurements, one first needs to know the family of functions in which the signal of interest is sparse. Depending on the case, one might be lucky and know that the signal is sparse in a basis found in harmonic analysis (2.1), or one may have to spend some work devising what this sparse basis is through an algorithm dedicated to finding sparse dictionaries from a set of signal examples (2.2 and 2.3). Finally, Remi Gribonval and Karin Schnass provide, in Dictionary Identification - Sparse Matrix-Factorisation via L1-Minimisation, estimates of the number of training examples needed to build a dictionary.
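As a toy illustration of compressibility in this engineering sense (the choice of signal and of the DCT basis below is mine, purely for illustration), a smooth signal expanded in a cosine basis has rapidly decaying coefficients, so keeping a handful of the largest terms already reconstructs it accurately:

```python
# Compressibility sketch: a smooth signal is well approximated by a few
# DCT coefficients (illustrative example, not from a specific reference).
import numpy as np
from scipy.fft import dct, idct

t = np.linspace(0, 1, 256)
signal = np.sin(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 7 * t)

coeffs = dct(signal, norm='ortho')           # expand in the DCT basis
order = np.argsort(np.abs(coeffs))[::-1]     # sort coefficients by magnitude

kept = np.zeros_like(coeffs)
kept[order[:10]] = coeffs[order[:10]]        # keep only the 10 largest terms
approx = idct(kept, norm='ortho')

rel_err = np.linalg.norm(approx - signal) / np.linalg.norm(signal)
print(rel_err)   # small: a few terms capture almost all of the energy
```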

2.1 Basis Functions for which signals are either sparse or compressible

2.2 Algorithms that find dictionaries in which signal representation is sparse are presented in:

Let us note the Matlab Toolbox Sparsity by Gabriel Peyre, which implements some of these techniques. Knowledge of signals from a specific domain enables one to build these hopefully small dictionaries.

For a review of the state of the art on how to compile dictionaries from training signals and the attendant theoretical issues, check the following document by Remi Gribonval for his Habilitation a Diriger des Recherches, entitled Sur quelques problèmes mathématiques de modélisation parcimonieuse (Sparse Representations: From Source Separation to Compressed Sensing). There is a video as well as an audio-only recording of this presentation in French. The accompanying slides in English are here.

2.3 Data Driven Dictionaries

The next step will almost certainly bring about techniques that find elements within a manifold as opposed to a full set of functions, some sort of Data Driven Dictionary. In this setting, one can list: 

Some of these techniques are being used for dimensionality reduction, which in effect is stating that datasets are compressible when being represented with these dictionaries.

  • Applying alternating direction method of multipliers for constrained dictionary learning
3. Measurement Matrices / Encodings

In this segment, the emphasis is on finding a means of acquiring sparse signals using projections onto bases that are incoherent with the bases found in the previous paragraph. However, a criterion is needed for checking whether a specific measurement or encoding matrix will allow the recovery of a sparse solution. For that purpose, an early argument was to expect certain families of matrices to follow the Restricted Isometry Property (RIP). It is however only a sufficient condition and, given a matrix, it is NP-hard to find out whether that matrix satisfies the property. It is also known that the RIP is too strict (as shown by Jared Tanner in Phase Transition Phenomenon in Sparse Approximation).
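Since certifying the RIP is NP-hard, in practice one can only probe a given matrix empirically, e.g. by checking how well it preserves the norm of random sparse vectors. The Monte Carlo sketch below is my own illustrative sanity check, and it proves nothing: it only estimates a lower bound on the restricted isometry constant over the sampled vectors.

```python
# Empirical probe of near-isometry on random k-sparse vectors
# (a sanity check only; it cannot certify the RIP).
import numpy as np

rng = np.random.default_rng(1)
n, m, k, trials = 128, 64, 5, 2000

A = rng.standard_normal((m, n)) / np.sqrt(m)   # scaled so E||Ax||^2 = ||x||^2

ratios = []
for _ in range(trials):
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    ratios.append(np.linalg.norm(A @ x) ** 2 / np.linalg.norm(x) ** 2)

ratios = np.array(ratios)
delta_est = max(1 - ratios.min(), ratios.max() - 1)
print(delta_est)   # small value suggests near-isometry on the sampled vectors
```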

3.1 Measurement Matrix Admissibility Conditions

3.1.1 The Donoho-Tanner Phase Transition

The following phase transition diagram, which I call the Donoho-Tanner phase transition, is probably the only good means of figuring out whether a given measurement / encoding matrix can provide good compressive capabilities with an l_1 solver. One should also note that the phase transition depends not only on the encoding matrix but also on the recovery algorithm used. The following websites provide an interactive way of determining:
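As a rough illustration (not tied to any of the interactive sites above), one can probe a single point of the diagram empirically: fix the coordinates delta = m/n (undersampling) and rho = k/m (sparsity), and estimate the fraction of exact l_1 recoveries over random draws. The sketch below uses SciPy's linear programming routine for the l_1 solve; all parameters are arbitrary illustrative choices.

```python
# Monte Carlo probe of one (delta, rho) point of the Donoho-Tanner diagram.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n = 80
delta, rho = 0.5, 0.25           # coordinates in the phase diagram
m = int(delta * n)               # number of measurements
k = int(rho * m)                 # number of nonzeros

def l1_recover(A, y):
    # Basis pursuit as a linear program with the split x = u - v
    nn = A.shape[1]
    res = linprog(np.ones(2 * nn), A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=(0, None))
    return res.x[:nn] - res.x[nn:]

trials, successes = 20, 0
for _ in range(trials):
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x0 = np.zeros(n)
    x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    successes += np.linalg.norm(l1_recover(A, A @ x0) - x0) < 1e-5

print(successes / trials)   # success fraction at this (delta, rho) point
```

Sweeping (delta, rho) over a grid and plotting the success fraction reproduces the familiar phase transition curve, at the cost of many such runs.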

3.1.2 Beyond L_1 minimization and the Donoho-Tanner Phase Transition for Sparse Recovery

Over time, several algorithms have beaten the Donoho-Tanner phase transition in the noiseless case, as shown by Bob Sturm in his Asilomar 2011 slides. These solvers use heuristics other than the L_1 norm to accomplish this. They include solvers such as SL0, IRLp, and greedy solvers / matching pursuits...
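As an illustration of one of these heuristics, here is a minimal, smoothed solver in the spirit of IRLp (iteratively reweighted least squares): each iteration solves a weighted minimum-norm problem in closed form, with the smoothing term annealed toward zero. This is my own annealed sketch, not the reference IRLp code, and the parameters are illustrative.

```python
# Sketch of an IRLp-style solver: min ||x||_p subject to Ax = y,
# approached by iteratively reweighted least squares (illustrative).
import numpy as np

def irls(A, y, p=1.0, iters=60, eps=1e-2):
    x = A.T @ np.linalg.solve(A @ A.T, y)          # start at the min-l2 solution
    for _ in range(iters):
        w = (x * x + eps) ** (1.0 - p / 2.0)       # weights ~ |x|^(2-p)
        Aw = A * w                                  # A @ diag(w) via broadcasting
        x = w * (A.T @ np.linalg.solve(Aw @ A.T, y))
        eps = max(eps / 2.0, 1e-8)                  # anneal the smoothing term
    return x

rng = np.random.default_rng(3)
n, m, k = 64, 28, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

x_hat = irls(A, A @ x0, p=1.0)
rel_err = np.linalg.norm(x_hat - x0) / np.linalg.norm(x0)
print(rel_err)
```

Taking p below 1 makes the penalty closer to the l_0 count, which is precisely how such reweighting schemes can outperform plain l_1 minimization.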

but also 
EM-BG-AMP by Jeremy Vila and Phil Schniter:
as well as another very similar solver which uses specific measurement matrices to obtain a new phase diagram:

For more information on these recent developments, check the following:

3.1.3 Other Properties (NSP, RIP,....)

Here is a list of properties of measurement matrices that enable sparse recovery. Let us note that most of them are sufficient, not necessary, conditions:

Other NP-Hard properties:
Other papers/presentations of interest include:

3.2 Non-Deterministic and Non-Adaptive Measurement Matrices / Encodings.

The first encoding matrices follow the RIP(2) property, while the most recent sparse encoding matrices follow the RIP(1) property. In the following, the first three measurement matrices are pasted from Terry Tao's site:
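For concreteness, the classic ensembles are easy to generate. The sketch below builds a Gaussian and a Bernoulli matrix (RIP(2)-type), a randomly subsampled Fourier matrix, and a sparse binary matrix of the RIP(1)/expander-graph type; the sizes and the column degree d are arbitrary illustrative choices.

```python
# Sketches of common measurement-matrix ensembles (illustrative sizes).
import numpy as np

rng = np.random.default_rng(4)
m, n = 32, 128

# Gaussian ensemble: i.i.d. N(0, 1/m) entries
A_gauss = rng.standard_normal((m, n)) / np.sqrt(m)

# Bernoulli ensemble: random +-1/sqrt(m) entries
A_bern = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

# Partial Fourier: m rows of the unitary n x n DFT matrix chosen at random
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
A_fourier = F[rng.choice(n, m, replace=False), :]

# Sparse binary matrix (expander-graph style, RIP(1)-type):
# each column carries d ones at random row positions
d = 8
A_sparse = np.zeros((m, n))
for j in range(n):
    A_sparse[rng.choice(m, d, replace=False), j] = 1.0

print(A_gauss.shape, A_fourier.shape, int(A_sparse.sum(axis=0).max()))
```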



3.3 Non-Deterministic and Adaptive Measurement codes/matrices as mentioned here (as well as new ones).

3.4 Deterministic Subsampling

3.4.1 Deterministic Subsampling with no prior information
  • Fourier <-> Diracs
  • Other pairs of bases

3.4.2 Deterministic Subsampling using additional prior information (besides sparsity)

Obviously, with the advent of Data Driven Dictionaries, I would expect domain-specific measurement matrix methods to emerge.

4. Reconstruction / Nonlinear Solvers

[ Please note: the most recent solvers are at the end of this list ]

[ For Authors: If your solver is not listed here, please contact me]

The following is a list of nonlinear solvers used to reconstruct the original signal from its compressed measurements / projections on an incoherent basis (as listed above in paragraph 3). Reconstruction codes span a wide range of techniques that include Matching Pursuit/Greedy, Basis Pursuit/Linear Programming, Bayesian, Iterative Thresholding, and Proximal methods. These links generally go to the Matlab toolbox implementing each technique. Originally, this list was similar to the list found in the excellent Rice Repository.
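As an example of the iterative thresholding family on this list, here is a minimal sketch of Iterative Hard Thresholding: a gradient step on the data-fit term followed by keeping only the k largest entries. This is a generic textbook-style variant written for illustration, not any particular toolbox's implementation, and the step size and parameters are my own choices.

```python
# Sketch of Iterative Hard Thresholding (IHT), illustrative only.
import numpy as np

def iht(A, y, k, iters=300):
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step for stability
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * (A.T @ (y - A @ x))   # gradient step on ||y - Ax||^2 / 2
        x[np.argsort(np.abs(x))[:-k]] = 0.0  # hard threshold to k terms
    return x

rng = np.random.default_rng(5)
n, m, k = 128, 64, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

x_hat = iht(A, A @ x0, k)
rel_err = np.linalg.norm(x_hat - x0) / np.linalg.norm(x0)
print(rel_err)
```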
Last Updated July 2013

5. Hardware / Compressive Sensing Sensor implementations

Most of these entries are summarized on the Compressive Sensing Sensor / Hardware page. You should go to that page to see updated information. One should note the difference between hardware that requires no modification, only different operating procedures, and totally new hardware. Hardware implementations vary according to what the hardware can already do. In the case of MRI, one is directly sampling in Fourier space, and it is therefore directly amenable to subsampling in the Fourier world. Other instruments do not sample in Fourier space, and one is led to think of inventive schemes to do so. The most salient feature of the random or noiselet measurements is that one can foresee a hardware implementation without having to know the decomposing sparse basis, since there is a high probability that the two will be incoherent with each other.
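The MRI case can be sketched in a few lines: the measurements are a random subset of the Fourier coefficients of a signal that is sparse in the Dirac basis (the canonical incoherent pair), and the zero-filled adjoint below is only the crude starting point that a nonlinear solver would then refine. Everything here, including the sizes, is an illustrative assumption.

```python
# Sketch of MRI-style acquisition: randomly subsampled Fourier coefficients
# of a spike train, plus the crude zero-filled adjoint reconstruction.
import numpy as np

rng = np.random.default_rng(6)
n, m = 256, 64                     # signal length, sampled frequencies

x0 = np.zeros(n)                   # sparse in the Dirac basis
x0[rng.choice(n, 5, replace=False)] = rng.choice([-1.0, 1.0], 5)

omega = rng.choice(n, m, replace=False)   # random "k-space" sample locations
y = np.fft.fft(x0)[omega]                 # the compressed measurements

# Zero-filled adjoint: put the samples back, inverse transform
spectrum = np.zeros(n, dtype=complex)
spectrum[omega] = y
x_zf = np.real(np.fft.ifft(spectrum))

# x_zf is roughly x0 attenuated by m/n plus incoherent aliasing;
# a sparse solver would take the reconstruction from here
print(float(np.dot(x_zf, x0)))
```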

Phase Retrieval

Sampling strategy:

Igor Carron,
Dec 3, 2011, 3:11 PM