
Posters

Henry Arguello (University of Delaware), Restricted Isometry Property in Coded Aperture Compressive Spectral Imaging

Coded Aperture Snapshot Spectral Imaging (CASSI) systems capture the spectral information of a scene using a set of coded focal plane array (FPA) measurements. Compressed sensing reconstruction algorithms are used to reconstruct the underlying 3D spectral data cube. The coded measurements in CASSI use structured sensing matrices. This article describes the Restricted Isometry Property (RIP) for the projection matrices used in CASSI. In turn, the RIP provides guidelines for the minimum number of FPA measurement shots needed for image reconstruction. It also provides the optimal transmittance parameters for the set of coded apertures used in the acquisition process.
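
As a rough illustration of the measurement model discussed above, the following Python sketch simulates one coded FPA shot for a toy data cube; the cube dimensions, the 50% transmittance, and the one-pixel-per-band shear are illustrative assumptions, not the parameters analyzed in the poster.

    # Minimal CASSI-style measurement sketch: mask each spectral band with
    # a random binary coded aperture, shear by one pixel per band, and
    # integrate on the focal plane array.  All sizes are toy values.
    import numpy as np

    rng = np.random.default_rng(0)
    N, L = 32, 8                                  # spatial size, number of bands
    cube = rng.random((N, N, L))                  # hypothetical spectral data cube
    transmittance = 0.5                           # fraction of open aperture elements
    code = (rng.random((N, N)) < transmittance).astype(float)

    fpa = np.zeros((N + L - 1, N))                # detector is taller due to dispersion
    for k in range(L):
        fpa[k:k + N, :] += code * cube[:, :, k]   # masked band k, sheared by k pixels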


Arnab Bhattacharyya (Rutgers University), Testing Assignments of Boolean CSPs

Given an instance I of a CSP, a tester for I distinguishes assignments satisfying I from those which are far from any assignment satisfying I. The efficiency of a tester is measured by its query complexity, the number of variable assignments queried by the algorithm. In this poster, we characterize the hardness of testing Boolean CSPs according to the relations used to form the constraints. In terms of computational complexity, we show that if a Boolean CSP is sublinear-query testable (resp., not sublinear-query testable), then the CSP is in NL (resp., P-complete, ParityL-complete, or NP-complete), and that if a Boolean CSP is constant-query testable (resp., not constant-query testable), then counting the number of solutions of the CSP is in P (resp., #P-complete). The classification is done by showing an $\Omega(n)$ lower bound for testing Horn 3-SAT and investigating Post's lattice, the inclusion structure of Boolean algebras associated with CSPs.
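
As a toy companion to the definitions above, here is a hedged Python sketch of a generic constraint-sampling tester: it queries the values of the variables appearing in a few randomly chosen constraints and rejects on any violation. It only illustrates what query complexity counts; the testers and lower bounds in the poster are specific to the relations involved.

    # Hypothetical constraint-sampling tester for a CSP instance.  Each
    # call to `query` reads one variable of the assignment and counts as
    # one query; `constraints` is a list of (variables, predicate) pairs.
    import random

    def sample_tester(constraints, query, trials=50):
        for _ in range(trials):
            variables, predicate = random.choice(constraints)
            values = [query(v) for v in variables]
            if not predicate(*values):
                return False      # a violated constraint: reject
        return True               # no violation found: accept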


Armin Eftekhari (Colorado School of Mines), The Restricted Isometry Property for Block Diagonal Matrices

In compressive sensing (CS), the restricted isometry property (RIP) is
a condition on measurement operators which ensures that robust
recovery of sparse vectors is possible from noisy, undersampled
measurements via computationally tractable algorithms. It is by now
well-known that Gaussian random matrices satisfy the RIP under certain
conditions on the number of measurements. Their use is limited in
practice, however, due to storage limitations, computational
considerations, or the mismatch of such matrices with certain
measurement architectures. These issues have recently motivated
considerable efforts towards studying RIP for structured random
matrices. In this poster, we study the RIP for block diagonal
measurement matrices where each block on the main diagonal is itself a
sub-Gaussian random matrix. Our main result states that such matrices
can indeed satisfy the RIP but that the requisite number of
measurements depends on certain properties of the basis in which the
signals are sparse. In the best case, these matrices perform nearly as
well as dense Gaussian random matrices despite having many fewer
nonzero entries.
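
The following Python sketch gives an empirical (not rigorous) feel for this result: it builds a block diagonal matrix with i.i.d. ±1 sub-Gaussian blocks and checks how well it preserves the norms of random sparse vectors; all dimensions are toy choices.

    # Build a block diagonal measurement matrix with sub-Gaussian blocks
    # and measure norm distortion on random sparse vectors.  Ratios near 1
    # indicate RIP-like behavior at this sparsity level.
    import numpy as np

    rng = np.random.default_rng(1)
    J, m, n, s = 8, 16, 64, 5                     # blocks, block size m x n, sparsity
    A = np.zeros((J * m, J * n))
    for j in range(J):
        A[j * m:(j + 1) * m, j * n:(j + 1) * n] = \
            rng.choice([-1.0, 1.0], (m, n)) / np.sqrt(m)

    ratios = []
    for _ in range(1000):
        x = np.zeros(J * n)
        support = rng.choice(J * n, s, replace=False)
        x[support] = rng.standard_normal(s)
        ratios.append(np.linalg.norm(A @ x) / np.linalg.norm(x))
    print(min(ratios), max(ratios))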

Amin Jalali (University of Washington), Recovery of Simultaneously Sparse Signals from Random Linear Measurements 

(joint work with Samet Oymak, Maryam Fazel, Yonina Eldar, and Babak Hassibi)

Michael Kallitsis (University of Michigan), Fast Algorithms for Optimal Large-scale Network Monitoring
(joint work with Stilian Stoev and George Michailidis)

Modern high-speed computer networks provide the actual infrastructure for indispensable services
such as e-mail, web, and IP telephony, as well as for a host of rapidly growing new applications such
as cloud computing, social networking, audio/video streaming, etc. The robustness and integrity of the
network require efficient tools for traffic monitoring, analysis, and prediction, which scale well with
traffic volume and network size. We address the problem of optimal large-scale monitoring of computer
networks under budget constraints. Specifically, we consider the task of selecting the “best” subset of at
most K links to monitor, so as to optimally predict the traffic loads at the remaining links in the network.
Our notion of optimality is quantified in terms of the statistical error of network kriging predictors
in the context of global network traffic models. The optimal monitoring problem is akin to certain
optimal statistical design problems. In the networking context, however, there are new combinatorial
constraints, which render the algorithms seeking the exact solution impractical. We develop a number
of fast algorithms that improve upon existing greedy algorithms in terms of computational complexity
and accuracy. Our algorithms exploit the geometry of principal component analysis, which also leads
us to new types of theoretical bounds on the prediction error. Finally, these algorithms are amenable to
randomization, where the best of several parallel independent instances often yields the exact optimal
solution in practice. Their performance is illustrated and evaluated on simulated and real-network traces.
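
To make the setup concrete, here is a hedged Python sketch of the greedy baseline that the poster's algorithms improve upon: under a Gaussian traffic model with a (here randomly generated, hypothetical) covariance Sigma, it adds one monitored link at a time so as to minimize the total conditional variance of the unmonitored links.

    # Greedy baseline for the monitoring-design problem: kriging_error is
    # the trace of the conditional covariance of the unobserved links
    # given the monitored set S, i.e. the total prediction error.
    import numpy as np

    rng = np.random.default_rng(2)
    n, K = 20, 5                                   # links, monitoring budget
    G = rng.standard_normal((n, n))
    Sigma = G @ G.T + n * np.eye(n)                # toy traffic covariance

    def kriging_error(Sigma, S):
        U = [i for i in range(len(Sigma)) if i not in S]
        if not S:
            return np.trace(Sigma)
        B = Sigma[np.ix_(U, S)]
        C = Sigma[np.ix_(S, S)]
        return np.trace(Sigma[np.ix_(U, U)] - B @ np.linalg.solve(C, B.T))

    S = []
    for _ in range(K):                             # add the best link K times
        S.append(min((i for i in range(n) if i not in S),
                     key=lambda i: kriging_error(Sigma, S + [i])))
    print(sorted(S), kriging_error(Sigma, S))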

Reut Levi (Tel Aviv University), Testing collections of distributions for similar means

We consider the problem of testing a basic property of collections of distributions: having similar means. Namely, the algorithm should accept collections of distributions in which all distributions have
means that do not differ by more than some given parameter, and should reject collections that are relatively far from having this property. By ‘far’ we mean that it is necessary to modify the distributions in a relatively significant manner (measured according to the $\ell_1$ distance averaged over the distributions)
so as to obtain the property. We study this problem in two models. In the first model (the query model) the algorithm may ask for samples from any distribution of its choice, and in the second model (the sampling
model) the distributions from which it gets samples are selected randomly. We provide upper and lower bounds in both models. In particular, in the query model, the complexity of the problem is polynomial in $1/\epsilon$ (where $\epsilon$ is the given distance parameter), while in the sampling model the complexity grows roughly as $m^{1-\epsilon^2}$, where $m$ is the number of distributions.
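
A naive Python sketch in the spirit of the query model appears below; each distribution is modeled as a zero-argument callable returning one sample in [0, 1], and the sample size and slack are ad hoc stand-ins, not the bounds established in this work.

    # Naive mean-comparison tester: estimate the means of a few chosen
    # distributions from samples and accept only if the estimates spread
    # by no more than the allowed gap plus a crude noise allowance.
    import random

    def test_similar_means(distributions, gap, eps, samples=200):
        chosen = random.sample(distributions, min(len(distributions), 20))
        means = [sum(d() for _ in range(samples)) / samples for d in chosen]
        return max(means) - min(means) <= gap + eps / 4   # crude noise slack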

Irene Manotas (Industrial University of Santander), Compressive Sensing Image Recovery Algorithms on a Smartphone Platform

Compressive Sensing (CS) is a remarkable framework that efficiently senses a signal using a set of random projections of the underlying signal. The random projections are then used by a reconstruction algorithm that recovers the initial signal. CS has been recognized for its diverse applications in a broad range of fields such as imaging, optics, and communications. Extensive efforts have been made to determine the minimum number of required random projections as well as to propose efficient optimization algorithms for correct, high-quality reconstruction. In practice, a huge number of matrix-vector operations is required to implement these reconstruction algorithms, which directly impacts their complexity and energy consumption. For that reason, CS techniques have mostly been implemented on high-performance computational architectures, such as personal computers, servers, or GPUs, and little research has been done on the analysis and implementation of CS techniques on mobile handset devices such as cellphones, whose computational resources are limited. This work implements two CS recovery algorithms on a mobile handset device; the computation time and energy consumption required to implement CS techniques on a mobile device with restricted memory and computational capacity are reported. The results show the appropriate CS configuration and recovery algorithm for mobile handsets according to two criteria: time and energy consumption.
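
As a desktop-side illustration of the kind of experiment described (the abstract does not name the two algorithms), here is a hedged Python sketch that times one simple recovery method, iterative hard thresholding, on a toy problem; on a handset the same loop would be timed and its energy draw logged.

    # Time a simple CS recovery (iterative hard thresholding) on a toy
    # problem.  The step size is set from the spectral norm of A so the
    # iteration is stable; all sizes are illustrative.
    import time
    import numpy as np

    rng = np.random.default_rng(3)
    m, n, s = 64, 256, 8
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # random projection matrix
    x_true = np.zeros(n)
    x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    y = A @ x_true                                 # compressive measurements

    def iht(A, y, s, iters=300):
        step = 1.0 / np.linalg.norm(A, 2) ** 2
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = x + step * A.T @ (y - A @ x)       # gradient step
            small = np.argsort(np.abs(x))[:-s]     # zero all but s largest
            x[small] = 0.0
        return x

    t0 = time.perf_counter()
    x_hat = iht(A, y, s)
    print(time.perf_counter() - t0, np.linalg.norm(x_hat - x_true))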

Alejandro Weinstein (Colorado School of Mines), Joint Denoising of a Signal Ensemble: The Power of Veto

This work presents a technique to denoise a signal ensemble by
exploiting sparsity at both the inter- and intra-signal levels. The
problem of signal denoising using thresholding estimators has received a
lot of attention in the literature, starting in the 1990s when Donoho
and Johnstone introduced the concept of wavelet shrinkage. In this
approach, the signal is represented in a basis where it is sparse, and
each noisy coefficient is thresholded by a parameter that depends on the
noise level. We extend this concept to a set of signals, under the
assumption that the signals have a common support. Our approach is
based on a vetoing mechanism, where in addition to thresholding, the
inter-signal information is used to "save" a coefficient that otherwise
would be "killed". Our method achieves a smaller risk than independent
denoising, and we quantify the expected value of this improvement. The
results show a consistent improvement over independent denoising,
achieving results close to those produced by an oracle. We validate the
technique using both synthetic and real-world signals.
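
A conceptual Python sketch of the veto idea under the common-support assumption follows; the universal threshold and the simple vote count are illustrative stand-ins, not the exact rule analyzed in the poster.

    # Each row of Y holds one signal's noisy coefficients in a basis where
    # the ensemble is sparse.  A coefficient below the threshold is still
    # "saved" when at least `votes` of the signals are large at that index.
    import numpy as np

    def veto_denoise(Y, sigma, votes):
        thresh = sigma * np.sqrt(2 * np.log(Y.shape[1]))   # universal threshold
        big = np.abs(Y) > thresh                           # per-signal keep mask
        saved = big.sum(axis=0) >= votes                   # inter-signal veto
        return Y * (big | saved[None, :])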

Erez Weisbard (Bar Ilan University), Efficient signatures for Network Coding

Network coding helps maximize network throughput. However, such
schemes are also vulnerable to pollution attacks in which malicious
forwarders inject polluted messages into the system. Traditional
cryptographic solutions such as digital signatures are not suited for
network coding in which nodes do not forward the original packets, but
rather linear combinations of the data they receive. We describe a
secure scheme that uses batch techniques and selective verification to
efficiently verify the integrity of the received packets. For real
peer-to-peer networks, our scheme is much more efficient than
previously suggested schemes.
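
To illustrate why ordinary signatures fail here and what a coding-compatible check looks like, the sketch below uses the classical homomorphic hash h(x) = prod_i g_i^{x_i} mod p, under which the hash of any linear combination of packets can be verified from the hashes of the originals. This is a standard construction in the same spirit, not the scheme of the poster, and the parameters are toy-sized and insecure.

    # Toy homomorphic-hash check for network coding (insecure sizes).
    import random

    p, q = 2579, 1289                  # safe prime p = 2q + 1, both prime
    gs = [pow(random.randrange(2, p - 1), 2, p) for _ in range(4)]  # order-q bases

    def h(packet):                     # h(x) = prod g_i^{x_i} mod p
        out = 1
        for g, x in zip(gs, packet):
            out = out * pow(g, x, p) % p
        return out

    # Source packets (symbols in Z_q) and their hashes, which a real
    # scheme would sign once at the source.
    pkts = [[random.randrange(q) for _ in range(4)] for _ in range(3)]
    hashes = [h(x) for x in pkts]

    # A forwarder sends a random linear combination of the packets ...
    a = [random.randrange(q) for _ in range(3)]
    combo = [sum(aj * x[i] for aj, x in zip(a, pkts)) % q for i in range(4)]

    # ... and the receiver verifies it using only the original hashes.
    expected = 1
    for aj, hj in zip(a, hashes):
        expected = expected * pow(hj, aj, p) % p
    assert h(combo) == expected        # holds for every valid combination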