Compressive Sensing: The Big Picture
A living document trying to paint the Big Picture in the Compressed Sensing or Compressive Sensing Framework.
[Wired/Slashdot readers, you may want to read this dedicated blog entry first!]
Table of Contents:
0. Why this Living Document ?
Compressed Sensing
or Compressive Sensing
is about acquiring and recovering a sparse signal in the most efficient way possible (subsampling) with the help of an incoherent projecting basis. Unlike traditional sampling methods, Compressed Sensing
provides a new framework for acquiring sparse signals in a multiplexed manner. The main theoretical findings in this recent field have mostly centered on how many multiplexed measurements are necessary to reconstruct the original signal, and on the attendant nonlinear reconstruction techniques needed to demultiplex these signals. Another equally important thrust in the field has been the actual building of sensing hardware
that can directly produce such multiplexed signals.
Information on this technique is growing fast, and only a few specialists understand how all these pieces fit together within the big picture. The Nuit Blanche blog
(RSS feed is here
) provides daily updates on new papers/preprints and ideas coming into the field, while this document is slated to be a little less active because the big picture should not change every day. While much emphasis is currently, and rightfully, spent on performing faster reconstruction of the original signal from the compressed measurements, we are also beginning to see the emergence of other tools that complete this framework.
These tools include
- the ability to search for bases or dictionaries in which sets of signals can be decomposed in a sparse manner, and
- the ability to find and quantify specific measurement tools that are incoherent with said dictionaries.
In other words, once a signal is known to be sparse in a specific basis, one of the main challenges is to find a set of measurement tools (producing the compressed measurements) and an attendant nonlinear solver that reconstructs the original full signal. There are theoretical results yielding the minimum number of measurements needed to recover the original signal for a given pair of measurement matrix and nonlinear solver. In all cases, the number of compressed measurements is expected to be low relative to traditional Nyquist sampling constraints. This living document aims at categorizing all these tools so as to enable their rapid adoption by the community.
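As a minimal illustration of the whole pipeline, here is an assumed toy setup in NumPy, with a greedy Orthogonal Matching Pursuit decoder standing in for the nonlinear solver (this is a sketch, not any specific package listed below):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 80, 8            # ambient dimension, measurements, sparsity

# a k-sparse signal in the canonical basis
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.standard_normal(k)

# random Gaussian measurement matrix: m multiplexed measurements, m << n
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x

# nonlinear "demultiplexing" via Orthogonal Matching Pursuit (greedy)
def omp(A, y, k):
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ residual))))    # best-matching column
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)  # refit on chosen support
        residual = y - A[:, idx] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(A, y, k)
print(np.linalg.norm(x - x_hat))   # reconstruction error; should be tiny at these sizes
```

The point of the sketch is the count: 80 multiplexed measurements suffice to recover all 256 samples, because the signal is 8-sparse and the Gaussian matrix is incoherent with the canonical basis.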
The next paragraphs try to bring the most up-to-date list of the different theoretical and applied endeavors implemented in this framework. The fourth paragraph tries to give an exhaustive list of hardware implementations using Compressed Sensing or Compressive Sampling. The fifth paragraph lists a search engine on the subject, and the sixth paragraph provides a calendar of activities in the field. If you feel it is inaccurate or is missing elements, please bring them to my attention
. Thanks! This document tries to summarize most of the information found in:
I have also featured a series of riddles and (magic) tricks that can make Compressive Sensing easier to understand on the CS Riddles page
. For more comprehensive examples, you may want to delve directly into Sparco
and its attendant set of examples.
Once you are convinced it works and want to delve into it, try to watch the excellent lecture videos:
In the same vein, there is a nice tutorial presentation on Compressed Sensing
by Richard Baraniuk
, Justin Romberg
, and Michael Wakin
as well as a more in-depth tutorial entitled Compressive sensing - Theory and applications
by Petros Boufounos
, Justin Romberg
and Richard Baraniuk
.[ There is now a video of Richard Baraniuk showing his latest introduction to Compressed Sensing
at Microsoft Research on August 4, 2008. Please be aware that the address does not seem to work with Firefox, Chrome or Opera (I tried); it only works with Internet Explorer. The original link that can be seen from any browser is here
.] Other presentations that might provide insights include:
Additionally there are two courses that might be complementary to these presentations:
Now onto the important stuff
2. Dictionaries for Sparse Recovery (How to Make or Find them)
When a signal is said to be sparse in an engineering sense, it really means that the signal is compressible, i.e. it can be expanded either in a small number of terms or in a series with significantly decaying coefficients. In order to produce compressed measurements, one first needs to know the family of functions in which the signal of interest is sparse. Depending on the case, one might be lucky and know that the signal is sparse in a basis found in harmonic analysis (2.1), or one may have to spend some work devising what that sparse basis is through an algorithm dedicated to finding sparse dictionaries from a set of signal examples (2.2 and 2.3). Finally, Remi Gribonval
and Karin Schnass
provide, in Dictionary Identification - Sparse Matrix-Factorisation via L1-Minimisation
, estimates of the number of training examples needed to build a dictionary.
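To see what compressibility means concretely, here is a small NumPy sketch; the random orthonormal basis and the power-law decay of the coefficients are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512
Psi = np.linalg.qr(rng.standard_normal((n, n)))[0]   # some orthonormal basis

# a compressible signal: coefficients decay like a power law
c = rng.choice([-1.0, 1.0], n) * np.arange(1, n + 1) ** -1.5
x = Psi @ c

# best k-term approximation: keep only the k largest coefficients
k = 25
idx = np.argsort(np.abs(c))[::-1][:k]
c_k = np.zeros(n)
c_k[idx] = c[idx]
x_k = Psi @ c_k

rel_err = np.linalg.norm(x - x_k) / np.linalg.norm(x)
print(rel_err)   # a few percent: 25 of 512 terms capture almost all the energy
```

This is exactly the "series with significantly decaying coefficients" above: the tail beyond the largest terms carries almost no energy, which is what makes subsampled acquisition possible.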
2.1 Basis Functions for which signals are either sparse or compressible
2.2 Algorithms that find dictionaries in which signal representations are sparse are presented in:
- CoefROMP: Improving Dictionary Learning: Multiple Dictionary Updates and Coefficient Reuse by Leslie N. Smith and Michael Elad.
- ODSL/Group-structured dictionaries: Online Group-Structured Dictionary Learning by Zoltan Szabo, Barnabás Póczos, and András Lőrincz: (.rar, .zip, .tar)
- BPFA: Nonparametric Bayesian Dictionary Learning for Analysis of Noisy and Incomplete Images by Mingyuan Zhou, Haojun Chen, John Paisley, Lu Ren, Lingbo Li, Zhengming Xing, David Dunson, Guillermo Sapiro and Lawrence Carin.
- PADDLE, a Python package for learning dictionaries, by Curzio Basso
- Online Learning for Matrix Factorization and Sparse Coding by Julien Mairal, Francis Bach, Jean Ponce, Guillermo Sapiro [The code is released as SPArse Modeling Software or SPAMS]
- Dictionary Learning Algorithms for Sparse Representation (Matlab implementation of FOCUSS/FOCUSS-CNDL is here)
- Multiscale sparse image representation with learned dictionaries [Matlab implementation of the K-SVD algorithm is here, a newer implementation by Ron Rubinstein is here ]
- Efficient sparse coding algorithms [ Matlab code is here ]
- Non-negative Sparse Modeling of Textures (NMF) [Matlab implementation of NMF (Non-negative Matrix Factorization) and NTF (Non-negative Tensor Factorization); a faster implementation of NMF can be found here; here is a more recent Non-Negative Tensor Factorizations package]
- Shift Invariant Sparse Coding of Image and Music Data. Matlab implementation is here
- MoTIF: an Efficient Algorithm for Learning Translation Invariant Dictionaries, but also Learning Multi-Modal Dictionaries. Also, more recent: Shift-invariant dictionary learning for sparse representations: extending K-SVD.
- Thresholded Smoothed-L0 (SL0) Dictionary Learning for Sparse Representations by Hadi Zayyani, Massoud Babaie-Zadeh and Remi Gribonval.
Let us note the Matlab Toolbox Sparsity
by Gabriel Peyre
, which implements some of these techniques. Knowledge of domain-specific signals makes it possible to build these hopefully small dictionaries.
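Most of the algorithms above share one alternation: sparse-code the training signals against the current dictionary, then update the dictionary by least squares (a MOD-style update). A toy sketch of that idea, with all sizes and names illustrative, not taken from any of the packages listed:

```python
import numpy as np

rng = np.random.default_rng(2)
n, K, N, k = 16, 24, 200, 3       # signal dim, atoms, training signals, sparsity

# synthetic training set: k-sparse combinations of a hidden dictionary
D_true = rng.standard_normal((n, K))
D_true /= np.linalg.norm(D_true, axis=0)
C_true = np.zeros((K, N))
for j in range(N):
    C_true[rng.choice(K, k, replace=False), j] = rng.standard_normal(k)
Y = D_true @ C_true

def sparse_code(D, y, k):         # plain OMP as the sparse-coding step
    r, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ r))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        r = y - D[:, idx] @ coef
    c = np.zeros(D.shape[1])
    c[idx] = coef
    return c

def normalize(D):                 # keep atoms unit-norm; guard unused (zero) atoms
    norms = np.linalg.norm(D, axis=0)
    return D / np.where(norms > 1e-12, norms, 1.0)

code = lambda D: np.column_stack([sparse_code(D, Y[:, j], k) for j in range(N)])

D = normalize(rng.standard_normal((n, K)))
err0 = np.linalg.norm(Y - D @ code(D))
for _ in range(15):
    C = code(D)                                                # 1) sparse coding
    D = normalize(np.linalg.lstsq(C.T, Y.T, rcond=None)[0].T)  # 2) MOD update
err = np.linalg.norm(Y - D @ code(D))
print(err < err0)                 # the fit improves as the dictionary adapts
```

K-SVD replaces step 2 with atom-by-atom rank-one updates, and the online methods above process training signals in streams, but the coding/update alternation is the common skeleton.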
For a review of the state of the art on how to compile dictionaries from training signals and the attendant theoretical issues, check the following document by Remi Gribonval
for his Habilitation a Diriger Des Recherches
entitled: Sur quelques problèmes mathématiques de modélisation parcimonieuse
translated as Sparse Representations: From Source Separation to Compressed Sensing. There are video
and audio-only
recordings of this presentation in French. The accompanying slides in English are here
2.3 Data Driven Dictionaries
The next step will almost certainly bring about techniques that find elements within a manifold as opposed to a full set of functions, some sort of Data Driven Dictionary. In this setting, one can list:
Some of these techniques are being used for dimensionality reduction, which in effect is stating that datasets are compressible when being represented with these dictionaries.
- Applying the alternating direction method of multipliers for constrained dictionary learning
In this segment, the emphasis is on finding a means of acquiring sparse signals using projections onto bases that are incoherent with the bases found in the previous paragraph
. However, a criterion is needed for checking whether a specific measurement or encoding matrix will allow the recovery of a sparse solution
. For that purpose, an early argument was to expect certain families of matrices to follow the Restricted Isometry Property (RIP). It is, however, only a sufficient condition, and, given a matrix, it is NP-hard to find out whether that matrix follows the property. It is also known that the RIP is too strict (as shown
by Jared Tanner
in Phase Transition Phenomenon in Sparse Approximation).
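The hardness claim can be made concrete: certifying the restricted isometry constant delta_k means examining every k-column submatrix, which is only feasible for toy sizes. An illustrative NumPy sketch:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
m, n, k = 12, 18, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)

# delta_k is the smallest delta such that
#   (1 - delta)||x||^2 <= ||Ax||^2 <= (1 + delta)||x||^2  for all k-sparse x.
# Certifying it requires every k-column submatrix: C(18,3) = 816 subsets here,
# but the count grows combinatorially with n and k, which is the practical
# face of the NP-hardness mentioned above.
delta = 0.0
for S in combinations(range(n), k):
    s = np.linalg.svd(A[:, list(S)], compute_uv=False)
    delta = max(delta, s[0] ** 2 - 1, 1 - s[-1] ** 2)
print(delta)    # the restricted isometry constant delta_3 of this draw
```

This is why the literature below either bounds delta_k probabilistically for random ensembles or falls back on weaker but computable conditions (NSP relaxations, coherence).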
3.1 Measurement Matrix Admissibility Conditions
3.1.1 The Donoho-Tanner Phase Transition
The following phase transition diagram that I call the Donoho-Tanner phase transition
is probably the only good means of figuring out whether a given measurement / encoding matrix can provide good compressive capabilities with an l_1 solver. One should also note that the phase transition depends not only on the encoding matrix but also on the recovery algorithm used. The following websites provide an interactive way of determining:
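Such a diagram can also be approximated empirically by Monte Carlo. The following toy sketch (NumPy, with OMP as an assumed recovery algorithm and illustrative sizes) fixes the undersampling ratio delta = m/n and sweeps the sparsity ratio rho = k/m across the transition:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, trials = 100, 40, 20        # undersampling ratio delta = m/n = 0.4 held fixed

def omp(A, y, k):
    r, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        r = y - A[:, idx] @ coef
    x = np.zeros(A.shape[1])
    x[idx] = coef
    return x

rate = {}
for k in (2, 8, 20):              # sparsity ratio rho = k/m = 0.05, 0.2, 0.5
    ok = 0
    for _ in range(trials):
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        ok += np.linalg.norm(omp(A, A @ x, k) - x) < 1e-6
    rate[k] = ok / trials
print(rate)    # success is near-certain at small rho and collapses past the transition
```

Repeating this over a grid of (delta, rho) and shading the success rate reproduces the familiar phase-transition picture, for whichever solver is plugged into the inner loop.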
3.1.2 Beyond L_1 minimization and the Donoho-Tanner Phase Transition for Sparse Recovery
Over time, several algorithms have beaten the Donoho-Tanner phase transition in the noiseless case, as shown by Bob Sturm in his Asilomar 2011 slides. These solvers use heuristics other than the L_1 norm to accomplish this. They include solvers such as SL0, IRLp, greedy solvers / matching pursuits...
but also EM-BG-AMP by Jeremy Vila and Phil Schniter:
as well as another very similar solver which uses specific measurement matrices to obtain a new phase diagram:
For more information on these recent developments, check the following:
3.1.3 Other Properties (NSP, RIP,....)
Here is a list of properties that measurement matrices must have in order to enable sparse recovery. Let us note that most of them are sufficient conditions:
- NSP (Null Space Property): as featured by David Donoho in Compressed Sensing and in Compressed sensing and best k-term approximation by Albert Cohen, Wolfgang Dahmen, and Ronald DeVore. Sufficient conditions for the Null Space Property can be computed with the following two approaches (JN and AE):
- NSP: Anatoli Juditsky and Arkadii Nemirovski found a sufficient condition that can be computed in On Verifiable Sufficient Conditions for Sparse Signal Recovery via l_1 Minimization. The code necessary to perform this verification is located at: http://sites.google.com/site/testsensingmatrices/ ( more information can be found here)
- NSP: Alexandre d'Aspremont and Laurent El Ghaoui are computing the same constant as JN with SDP in Testing the Nullspace Property using Semidefinite Programming. The NSProp implementation is now available here: http://www.princeton.edu/~aspremon/NSPcode.htm
- SCSP, WCSP, WP: as featured in A Remark on Compressed Sensing by Boris Kashin and Vladimir Temlyakov
- Incoherence: as shown in Sparsity and incoherence in compressive sampling by Emmanuel Candès and Justin Romberg.
- RIP: In Combining Geometry And Combinatorics: A Unified Approach to Sparse Signal Recovery by Radu Berinde, Anna Gilbert, Piotr Indyk, Howard Karloff and Martin Strauss, the authors show the possibility of unifying the different measurement matrices found so far. Hence, the Restricted Isometry Property can be broken into RIP(1) and RIP(2) properties. However, when one talks about RIP, one generally refers to RIP(2). Even though this property is NP-hard to verify, an attempt to provide some bound on the Restricted Isometry Constant can be found in Testing the Nullspace Property using Semidefinite Programming by Alexandre d'Aspremont and Laurent El Ghaoui. Two more recent papers supply bounds for the Gaussian ensemble: Compressed Sensing: How Sharp is the Restricted Isometry Property? by Jeffrey Blanchard, Coralia Cartis, and Jared Tanner, and Improved Bounds on Restricted Isometry Constants for Gaussian Matrices by Bubacarr Bah and Jared Tanner.
- StRIP: As featured by Robert Calderbank, Stephen Howard and Sina Jafarpour in Construction of a Large Class of Deterministic Sensing Matrices that Satisfy a Statistical Isometry Property. This property is computable as mentioned here but it seemed to be restricted to a certain set of encoding matrices.
- SRIP: As featured by Shamgar Gurevich and Ronny Hadani in Incoherent dictionaries and the Statistical Restricted Isometry property.
- GRIP: As featured by Jarvis Haupt and Robert Nowak in this technical report entitled A generalized restricted isometry property (more here)
- RIP_p: Laurent Jacques, David Hammond and M. Jalal Fadili mention it in their paper on Dequantizing CS. The authors use RIP_p to characterize the performance of the BPDQ decoder. RIP_p is simply the "sandwiching" of the Lp norm (with p >= 2) of the random projection of sparse signals x, i.e. Phi x, between their L2 norm multiplied by the common (1 ± delta)^(1/2) factors, so that RIP_2 is the common RIP. Similarly, Rick Chartrand and Valentina Staneva, in Restricted isometry properties and nonconvex compressive sensing, also introduced an RIP_p definition, with an existence proof for Gaussian matrices, but for 0 < p <= 1. (see this entry)
- UUP: Terry Tao has a set of properties listed here (this is an early version)
Other NP-Hard properties:
Other papers/presentations of interest include:
- Compressed Sensing II, Ronald DeVore
- Ron DeVore's short course ('08)
- Compressed Sensing: Lecture I, Ronald DeVore
- Compressed Sensing: Lecture II, Ronald DeVore
- Compressed Sensing: Lecture III, Ronald DeVore
- Decoding in Compressed Sensing, Albert Cohen, Wolfgang Dahmen, Ronald DeVore, Guergana Petrova, Przemek Wojtaszczyk
- Open problem on properties.
- Some stories about Lp-minimization, the Restricted Isometry Property, and excessive pessimism by Remi Gribonval
3.2 Non-Deterministic and Non Adaptive Measurement matrices / Encodings.
The first encoding matrices followed the RIP(2) property, while the most recent sparse encoding matrices follow the RIP(1) property. In the following, the first three measurement matrices are pasted from Terry Tao's site
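As a concrete instance of the incoherence these ensembles rely on, the spike/Fourier pair attains the smallest possible mutual coherence, which a few lines of NumPy can verify:

```python
import numpy as np

n = 64
F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # orthonormal Fourier basis (as rows)

# mutual coherence mu(Phi, Psi) = sqrt(n) * max |<phi_i, psi_j>| ranges from
# 1 (maximally incoherent pair) to sqrt(n) (identical bases). Against the
# canonical spike basis, the inner products are just the entries of F.
mu = np.sqrt(n) * np.max(np.abs(F))
print(mu)   # ~1.0: spikes and sinusoids are maximally incoherent
```

Low coherence is what lets a small number of randomly chosen rows of one basis capture a signal sparse in the other, which is the common thread behind the matrices listed in this section.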
3.3 Non-Deterministic and Adaptive Measurement codes/matrices as mentioned here (as well as new ones).
3.4 Deterministic Subsampling
3.4.1 Deterministic Subsampling with no prior information
3.4.1.1 Fourier <-> Diracs
3.4.1.2 Other pairs of bases
3.4.2 Deterministic Subsampling using additional prior information (besides sparsity)
Obviously, with the advent of Data Driven Dictionaries, I would expect that we will have domain-specific measurement matrix methods.
[ Please note: the most recent solvers are at the end of this list ]
[ For Authors: If your solver is not listed here, please contact me]
The following is a list of nonlinear solvers used to reconstruct the original signal from its compressed measurements / projections on an incoherent basis (as listed above in paragraph 3). Reconstruction codes span a wide range of techniques that include Matching Pursuit/Greedy, Basis Pursuit/Linear Programming, Bayesian, Iterative Thresholding, and Proximal methods. These links generally go to the Matlab toolbox implementing each technique. Originally, this list was similar to the list found in the excellent Rice Repository
- l1-Magic by Emmanuel Candes and Justin Romberg
- SparseLab by David Donoho, Iddo Drori, Victoria Stodden, Yaakov Tsaig, Morteza Shahram with major contributions from: Michael Saunders, Joshua Sweetkind-Singer, Michael Elad, Lawrence Carin, Shihao Ji, Ya Xue, David Dunson. (version 2.0)
- GPSR by Mário Figueiredo, Robert D. Nowak, Stephen J. Wright. (version 6.0)
- ell-1 LS: Simple Matlab Solver for ell-1-Regularized Least Squares Problems by Kwangmoo Koh, Seung-Jean Kim, and Stephen Boyd (version beta)
- Sparsify by Thomas Blumensath (version 0.4)
- MPTK: Matching Pursuit Toolkit, with two packages based on MPTK by Sacha Krstulovic and Remi Gribonval (version 0.5.6)
- Bayesian Compressive Sensing (BCS): also includes TSW-CS (release Jan 19, 2009), implemented via an MCMC approach, and VB-BCS, implementing the basic BCS via a variational Bayesian approach. Implemented by Shihao Ji, Ya Xue, David Dunson, Yuting Qi, Dehong Liu, Lihan He and Lawrence Carin. (Version 0.1)
- SPGL1: A solver for large-scale sparse reconstruction by Michael Friedlander and Ewout van den Berg (Version 1.6)
- FPC by Elaine Hale, Wotao Yin and Yin Zhang (Version 1.0)
- FPC_AS by Zaiwen Wen, Wotao Yin (Version 1.0)
- Chaining Pursuit by Joel Tropp
- Regularized OMP (ROMP) by Deanna Needell and Roman Vershynin
- TwIST by José Bioucas-Dias and Mário Figueiredo.
- SparSA by Stephen Wright, Robert Nowak, Mario Figueiredo. (version 2.0)
- SALSA and C-SALSA by Manya Afonso, Mário Figueiredo, José Bioucas-Dias.
- Bregman Iterative Algorithms for Constrained Compressed Sensing and Related Problems by Wotao Yin, Stanley Osher, Donald Goldfarb and Jerome Darbon. It uses a modification of FPC. You need to request it from the site (more specifically here)
- The Split Bregman Method for L1 Regularized Problems. The associated C++ code is here, a new Matlab/mex version is here. By Tom Goldstein and Stan Osher.
- Applying the Proximal Point Algorithm to a Non Negative Basis Pursuit Denoising model by François Malgouyres and Tieyong Zeng. The software is here. It features Matlab codes, scripts and results (30 MB). The scripts are compatible with the SPARCO toolbox.
- Sparsest solutions of underdetermined linear systems via l_q-minimization for 0 < q <= 1. The code is here. By Simon Foucart and Ming-Jun Lai
- Chambolle's algorithm for the resolution of compressed sensing with TV regularization by Gabriel Peyre. This algorithm is included in the larger Toolbox Sparsity - A toolbox for sparse coding and sparse regularization
- Fast Bayesian Matching Pursuit (FBMP) by Justin Ziniel, Lee C. Potter, Philip Schniter. The code is located here.
- SAMP: Sparsity Adaptive Matching Pursuit for Practical Compressed Sensing : main page and attendant matlab code. By Thong T. Do, Lu Gan, Nam Nguyen, and Trac D. Tran.
- Smoothed L0 (SL0) Algorithm for Sparse Decomposition by G. Hosein Mohimani, Massoud Babaie-Zadeh and Christian Jutten (local version of sl0.m as provided by G. Hosein Mohimani). Now works with complex functions.
- Convex Iteration by Jon Dattorro. It is discussed in his book on Convex Optimization, and specifically in Chapter 4, pages 306 to 314. The Matlab algorithm is available here.
- Gradient based method for cone programming with application to large-scale compressed sensing by Zhaosong Lu. The attendant Matlab source codes are here.
- Sparse Matching Pursuit (SMP) and Count-Min recovery algorithm as featured in the Sparse Recovery Experiments wiki of Piotr Indyk and Radu Berinde
- Alternating l_1 reconstruction algorithm, presented by Stephane Chretien in Tighter relaxations for the sparsest recovery problem. The page for the code is here.
- Non-negative Underdetermined Iteratively Reweighted Least Squares (NUIRLS) code by Paul O’Grady and Scott Rickard as described in Compressive Sampling of Non-Negative Signals.
- Compressive Sensing via Belief Propagation website and code (CSBP) by Dror Baron.
- Compressed Signal Reconstruction using the Correntropy Induced Metric by Sohan Seth and Jose C. Principe (see the earlier entry). Sohan Seth also released the code implemented in the article.
- RecPF (Version 2.0, May 25th, 2009 ): A MATLAB code for image reconstruction from partial Fourier data that solves models with total-variation and l_1 regularization and an l_2-norm fidelity to fit the available incomplete Fourier data. Developed by Yin Zhang, Junfeng Yang and Wotao Yin.
- FTVd (version 3.0, January 6th, 2009): A MATLAB code for image deblurring and denoising that solves the model with total-variation regularization and l_2-norm fidelity. Developed by Yin Zhang, Junfeng Yang, Yilun Wang and Wotao Yin.
- A Coordinate Gradient Descent Method for l_1-regularized Convex Minimization by Sangwoon Yun and Kim-Chuan Toh: Matlab code of the CGD method for the L1-regularized linear least squares problem, Matlab code of the CGD method for the L1-regularized logistic regression problem.
- Iteratively Reweighted Lp as implemented in Iteratively Re-weighted Least Squares Minimization for Sparse Recovery by Ingrid Daubechies, Ronald DeVore, Massimo Fornasier, and C. Sinan Güntürk: irwls_main.m, it_min_l2_w_fast.m
- Two Step Reweighted l1 code from Improved Sparse Recovery Thresholds with Two-Step Reweighted $\ell_1$ Minimization by M. Amin Khajehnejad, Weiyu Xu, Salman Avestimehr, Babak Hassibi.
- Angshul Majumdar sent me a couple of Matlab files for solving lp analysis and synthesis prior algorithms. Both are first-order algorithms, unlike IRLS. To a large extent they are based on the Majorization-Minimization approach, which I learnt from Selesnick's and Figueiredo's papers: LpAnalysis.m, LpSynthesis.m
- Lp_re, Iteratively Reweighted Lp. This is my implementation, but a better one might be available from the authors. Implemented from Iteratively Reweighted Algorithms for Compressive Sensing by Rick Chartrand and Wotao Yin
- CSRec_SP.m: Subspace Pursuit original code as provided by one of the authors, Wei Dai. (Subspace Pursuit for Compressive Sensing: Closing the Gap Between Performance and Complexity [*])
- COSAMP: cosamp.m implemented by David Mary. A better version by Bob Sturm can be found here. [as described in CoSaMP: Iterative signal recovery from incomplete and inaccurate samples by Deanna Needell and Joel Tropp] [*]
- S-POCS, Using Non Convex Sparse Signal Space Instead of the Convex l1 Ball Improves POCS Recovery introduced by Laurent Jacques
- Kalman Filter Compressed Sensing page by Namrata Vaswani, Wei Lu, Chenlu Qiu
- ExCoV page by Aleksandar Dogandžić and Kun Qiu.
- Angshul Majumdar has implemented some greedy algorithms for group sparsity, following the works of Eldar et al. in:
 Y. C. Eldar and M. Mishali, "Robust Recovery of Signals From a Union of Subspaces", arXiv.org 0807.4581, submitted to IEEE Trans. Inform. Theory, July 2008.
 Block-Sparsity: Coherence and Efficient Recovery by Yonina C. Eldar and Helmut Bolcskei, to appear in ICASSP 2009.
Four algorithms: Block Orthogonal Matching Pursuit (BOMP), and three others, namely:
- GOMP - Group Orthogonal Matching Pursuit (similar to BOMP, except that it decides the group by the average correlation of each group instead of the highest correlation)
- StGOMP - Stagewise GOMP, combining ideas of BOMP with StOMP
- ReGOMP - Regularized GOMP, combining ideas of ROMP and GOMP.
All these codes are accessible here.
- Angshul Majumdar also implemented the original OLS algorithm proposed in T. Blumensath, M. E. Davies, "On the Difference Between Orthogonal Matching Pursuit and Orthogonal Least Squares", manuscript 2007. That algorithm is implemented in SparseOLS.zip, along with two modified versions: ROLS - Regularized OLS (following ideas similar to ROMP) and StOLS - Stagewise OLS (following ideas similar to StOMP)
- Fast Bayesian Compressive Sensing using Laplace Priors. by S. Derin Babacan, Rafael Molina, and Aggelos Katsaggelos. The source code can be found on the project webpage.
- Non-Iterative Computation of Sparsifiable Solutions to Underdetermined Kronecker Product Linear Systems of Equations by Andrew Yagle. The code is listed in the blog
- YALL1 (version beta-4, March 26, 2009), Your ALgorithms for L1 by Yin Zhang
- NESTA : (version 1.0, June 3, 2009) A Fast and Accurate First-order Method for Sparse Recovery by Jérôme Bobin, Stephen Becker, and Emmanuel Candès.
- The Two Stage l1 Approach to the Compressed Sensing Problem by Stephane Chretien. The code is here (June 10th, 2009)
- Dirk Lorenz just sent me an e-mail:
I just wanted to let you know that I did what I should have done months or years ago: I put m-files for l^p and especially l^1-minimization on my page (see the following three m files)
- iter_thresh.m which implements the usual iterated soft thresholding for different values of p and also a sophisticated other algorithm which we called iterated hard thresholding (not to be confused with the algorithm proposed by Blumensath and Davies). As presented in Iterated hard shrinkage for minimization problems with sparsity constraints by Kristian Bredies and Dirk A. Lorenz.
- ssn.m which implements the semismooth Newton method for l^1-minimization and which is incredibly fast for the cases in which it converges (unfortunately this is not always the case... convergence is indeed only local) ( As presented in A Semismooth Newton Method for Tikhonov Functionals with Sparsity Constraints by Roland Griesse and Dirk A. Lorenz).
- ppp_l1.m which implements the projection proximal point method for l^1 minimization (as presented in Numerical Functional Analysis and Optimization).
- Angshul Majumdar also sent me the following:
This is the code for solving problems of the form
min ||x||_m,p s.t. y = Ax
I have replaced the ||.||_2,p norm with the ||.||_m,p norm for the group sparsity problem.
It uses the same Re-weighted Least Squares method but with a different weighting method.
The jth index of the ith group is weighted by (||x(j)||_m)^(p-m) . |x(i,j)|^(m-2).
I derived it using the FOCUSS methodology.
The code is here: lmp_re_ls.m
- The BPDQ Toolbox by David Hammond, Laurent Jacques and M. Jalal Fadili . The Basis Pursuit Dequantizers for recovery of sparse signals from quantized random measurements.
- The GraDes algorithm is now available here. It was featured in Gradient Descent with Sparsification: An iterative algorithm for sparse recovery with restricted isometry property by Rahul Garg and Rohit Khandekar.
- proMP.m an implementation of Probabilistic Matching Pursuit for Compressive Sensing by Atul Divekar and Okan Ersoy (to be used with pbp.m )
- SLEP (Sparse Learning with Efficient Projections) package by Jun Liu and Jieping Ye provides a set of programs for sparse learning:
- ℓ1-regularized (constrained) sparse learning
- ℓ1/ℓq-regularized sparse learning (q>1)
- Fused Lasso
- Sparse inverse covariance estimation
- Trace norm regularized learning
- Simon Foucart
released the Hard Thresholding Pursuit (HTP) code featured in Hard
Thresholding Pursuit: an algorithm for Compressive Sensing.
- Model-based Compressive Sensing Toolbox: Richard G. Baraniuk, Volkan Cevher, Marco F. Duarte, Chinmay Hegde (as featured in Model-Based Compressive Sensing)
- RFSS.m is the code implemented in: Elastic-Net Regularization: Error estimates and Active Set Methods by Bangti Jin, Dirk A. Lorenz, and Stefan Schiffler.
- Threshold-ISD by Yilun Wang and Wotao Yin
- PD-Sparse: Penalty Decomposition Methods for $l_0$-Norm Minimization by Zhaosong Lu, Yong Zhang.
- TFOCS (Templates for First-Order Conic Solvers) by Stephen Becker, Emmanuel J. Candes and Michael Grant.
- Cyclic Matching Pursuit by Bob Sturm
- Compressed Sensing Image Reconstruction via Recursive Spatially Adaptive Filtering by Aram Danielyan, Karen Egiazarian, Alessandro Foi, Vladimir Katkovnik
- ALPS: accelerated IHT methods for sparse recovery by Volkan Cevher and Anastasios Kyrillidis
- Sparse Bayesian learning exploiting temporal correlation: T-SBL/T-MSBL (MMV algorithms) by Zhilin Zhang and Bhaskar D. Rao.
- tMFOCUSS [temporal M-FOCUSS, which exploits temporal correlation] by Zhilin Zhang and Bhaskar D. Rao.
- ARSBL [AR model based sparse Bayesian learning] by Zhilin Zhang and Bhaskar D. Rao.
- Binary Iterative Hard Thresholding (BIHT) Demo by Laurent Jacques, Jason Laska, Petros Boufounos, and Richard Baraniuk
- Approximate Message Passing (AMP) Code for Compressive Sensing by Ulugbek Kamilov
- The Generalized Approximate Message Passing (GAMP) MATLAB package by Sundeep Rangan, Alyson Fletcher, Vivek Goyal, Ulugbek Kamilov, Jason Parker, Phil Schniter.
- Compressive Imaging using Turbo AMP by Subhojit Som and Phil Schniter.
- Spectral compressive sensing toolbox by Marco Duarte and Richard Baraniuk
- Recursive Projected Compressive Sensing by Chenlu Qiu, Namrata Vaswani
- Distributed Basis pursuit ( BPinSN) : From Distributed Basis Pursuit by João F. C. Mota, João M. F. Xavier, Pedro M. Q. Aguiar, and Markus Püschel.
- RecPC by Wotao Yin , Simon Morgan (Los Alamos National Lab), Junfeng Yang, Yin Zhang
- CLASH: The algorithm is presented in "Combinatorial Selection and Least Absolute Shrinkage via the CLASH Algorithm", Technical Report, by Anastasios Kyrillidis and Volkan Cevher (corresponding ICML 2011 Structured Sparsity Workshop slides can be found here).
- TurboGAMP: joint channel-estimation/equalization/decoding of BICM-OFDM signals transmitted over channels with clustered-sparse impulse responses by Phil Schniter.
- ISAL1: Infeasible-Point Subgradient Algorithm for L1-Minimization by Dirk Lorenz, Marc Pfetsch and Andreas Tillmann (SPEAR Project page)
- A*OMP: A* Orthogonal Matching Pursuit: Best-First Search for Compressed Sensing Signal Recovery by Nazim Burak Karahanoglu and Hakan Erdogan
- SpaRCS: SpaRCS: Recovering Low-rank and Sparse matrices from Compressive Measurements by Andrew E. Waters, Aswin C. Sankaranarayanan, and Richard G. Baraniuk
- SA-MUSIC: Subspace Methods for Joint Sparse Recovery by Kiryung Lee and Yoram Bresler
- EM-BG-AMP: Expectation-Maximization Bernoulli-Gaussian Approximate Message Passing by Jeremy Vila and Philip Schniter.
- BMP: Robust 1-bit compressive sensing using binary matching pursuit by Ming Yan, Yi Yang, Stanley Osher
- ASPICS: Statistical physics-based reconstruction in compressed sensing by Florent Krzakala, Marc Mézard, François Sausset, Yifan Sun, Lenka Zdeborová (C++, Python, now in Matlab)
- CPR: Compressive Phase Retrieval From Squared Output Measurements Via Semidefinite Programming by Henrik Ohlsson, Allen Y. Yang, Roy Dong, S. Shankar Sastry.
- Compressive MUSIC (with forward CSF and backward CSF): Compressive MUSIC: A Missing Link between Compressive Sensing and Array Signal Processing by Jong Min Kim, Ok Kyun Lee, and Jong Chul Ye.
- Mirror Prox Solver by Anatoli Juditsky, Fatma Kilinc Karzan, Arkadi Nemirovski. Manual is here. Attendant paper: L1 Minimization via Randomized First Order Algorithms
- Bregman CookBook by
- SPIN Toolbox: Signal Recovery on Incoherent Manifolds by Chinmay Hegde and Richard Baraniuk.
- Sparse Signal Reconstruction from Quantized Noisy Measurements via GEM Hard Thresholding by Kun Qiu and Aleksandar Dogandžić.
- Split Bregman implementations by Tom Goldstein
- TV normed pursuits by Anastasios Kyrillidis, Gilles Puy, and Volkan Cevher.
- EM-GM-AMP: An Algorithm for Sparse Reconstruction: by Jeremy Vila and Phil Schniter
- Gradient Support Pursuit (GraSP) : Greedy Sparsity-Constrained Optimization by Sohail Bahmani, Petros Boufounos, and Bhiksha Raj.
- ProPPA: ProPPA: A Fast Algorithm for $\ell_1$ Minimization and Low-Rank Matrix Completion by Ranch Y.Q. Lai, Pong C. Yuen.
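Many of the l_1-regularized solvers in this list elaborate on a single primitive, the iterated soft-thresholding step. A bare-bones sketch of that primitive, with illustrative sizes and not taken from any of the packages above:

```python
import numpy as np

def ista(A, y, lam, n_iter=5000):
    """Iterated soft thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                  # step size from the Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L              # gradient step on the quadratic term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrink toward zero
    return x

rng = np.random.default_rng(5)
n, m, k = 128, 60, 5
support = rng.choice(n, k, replace=False)
x = np.zeros(n)
x[support] = rng.choice([-1.0, 1.0], k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = ista(A, A @ x, lam=0.02)
print(np.sort(np.argsort(np.abs(x_hat))[-k:]))     # indices of the recovered support
```

Accelerated variants (FISTA, TwIST, SpaRSA, FPC) change the step-size and momentum schedule, and Bregman/ADMM methods change the outer loop, but the shrinkage operator itself is this one line.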
From now on, I will list the entries on the Nuit Blanche blog dedicated to the reconstruction solvers (One entry = one solver). They are all listed under the implementation tag, they are also listed in the Reproducible Research Page.
Last Updated July 2013
5. Hardware / Compressive Sensing Sensor implementations
Most of these entries are summarized in the Compressive Sensing Sensor / Hardware page. You should go to that page to see updated information. One should note the difference between hardware that requires no modification, only different operating procedures, and totally new hardware. Hardware implementations vary according to what the hardware can already do. In the case of MRI, one directly samples in Fourier space, so the acquisition is directly amenable to subsampling in the Fourier world. Other instruments do not sample in Fourier space, and one is led to devise inventive schemes to do so. The most salient feature of random or noiselet measurements is that one can foresee a hardware implementation without having to know the decomposing sparse basis, since there is a high probability that the two will be incoherent with each other.
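The MRI case can be sketched end to end: the scanner records a few random Fourier coefficients of a signal that is sparse in the image domain, and reconstruction fills in the rest. An illustrative NumPy toy, with OMP standing in for the solvers actually used in practice:

```python
import numpy as np

rng = np.random.default_rng(7)
n, m, k = 128, 40, 4

# a few point-like features: sparse in the canonical (image) basis
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = 1.0 + rng.random(k)

# MRI-style acquisition: keep only m random rows of the Fourier transform
rows = rng.choice(n, m, replace=False)
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
A = F[rows]                        # partial Fourier measurement matrix
y = A @ x                          # all the data the "scanner" ever records

# greedy recovery (OMP) from the subsampled Fourier data
r, idx = y.copy(), []
for _ in range(k):
    idx.append(int(np.argmax(np.abs(A.conj().T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
    r = y - A[:, idx] @ coef
x_hat = np.zeros(n, dtype=complex)
x_hat[idx] = coef
print(np.linalg.norm(x - x_hat))   # should be near zero at these sizes
```

Here 40 of 128 Fourier samples suffice, which is the operational content of the paragraph above: when the instrument already measures in an incoherent domain, compressive acquisition is just a change of sampling schedule, not new hardware.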
Phase Retrieval
Sampling strategy: