## Abstracts

### Posters

**Henry Arguello (University of Delaware), Restricted Isometry Property in Coded Aperture Compressive Spectral Imaging**

Coded Aperture Snapshot Spectral Imaging (CASSI) systems capture the spectral information of a scene using a set of coded focal plane array (FPA) measurements. Compressed sensing reconstruction algorithms are then used to reconstruct the underlying 3D spectral data cube. The coded measurements in CASSI use structured sensing matrices. This work describes the Restricted Isometry Property (RIP) for the projection matrices used in CASSI. In turn, the RIP provides guidelines for the minimum number of FPA measurement shots needed for image reconstruction, as well as the optimal transmittance parameters for the set of coded apertures used in the acquisition process.

**Arnab Bhattacharyya (Rutgers University), Testing Assignments of Boolean CSPs**

Given an instance I of a CSP, a tester for I distinguishes assignments satisfying I from those that are far from any assignment satisfying I. The efficiency of a tester is measured by its query complexity, the number of variable assignments queried by the algorithm. In this poster, we characterize the hardness of testing Boolean CSPs according to the relations used to form the constraints. In terms of computational complexity, we show that if a Boolean CSP is sublinear-query testable (resp., not sublinear-query testable), then the CSP is in NL (resp., P-complete, ParityL-complete, or NP-complete), and that if a Boolean CSP is constant-query testable (resp., not constant-query testable), then counting the number of solutions of the CSP is in P (resp., #P-complete).
The classification is done by showing an $\Omega(n)$ lower bound for testing Horn 3-SAT and by investigating Post's lattice, the inclusion structure of Boolean algebras associated with CSPs.

**Armin Eftekhari (Colorado School of Mines), The Restricted Isometry Property for Block Diagonal Matrices**

In compressive sensing (CS), the restricted isometry property (RIP) is a condition on measurement operators which ensures that robust recovery of sparse vectors is possible from noisy, undersampled measurements via computationally tractable algorithms. It is by now well known that Gaussian random matrices satisfy the RIP under certain conditions on the number of measurements. Their use is limited in practice, however, due to storage limitations, computational considerations, or the mismatch of such matrices with certain measurement architectures. These issues have recently motivated considerable efforts towards studying the RIP for structured random matrices. In this poster, we study the RIP for block diagonal measurement matrices where each block on the main diagonal is itself a sub-Gaussian random matrix. Our main result states that such matrices can indeed satisfy the RIP, but that the requisite number of measurements depends on certain properties of the basis in which the signals are sparse.
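As a rough numerical illustration of such a measurement operator (not the poster's analysis; the block sizes and sparsity level below are arbitrary choices), one can build a block diagonal matrix with Gaussian blocks, a simple sub-Gaussian ensemble, and check empirically how close it is to an isometry on random sparse vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
J, m, n = 8, 40, 64    # 8 diagonal blocks, each 40 x 64
s = 5                  # sparsity level of the test vectors

# Block diagonal measurement matrix: each block is i.i.d. Gaussian,
# scaled so that E||Phi x||^2 = ||x||^2.
Phi = np.zeros((J * m, J * n))
for j in range(J):
    Phi[j*m:(j+1)*m, j*n:(j+1)*n] = rng.standard_normal((m, n)) / np.sqrt(m)

# How far is Phi from an isometry on random s-sparse vectors? (Here
# sparse in the identity basis; the poster's point is that the number
# of measurements needed depends on the sparsity basis.)
ratios = []
for _ in range(200):
    x = np.zeros(J * n)
    supp = rng.choice(J * n, size=s, replace=False)
    x[supp] = rng.standard_normal(s)
    ratios.append(np.linalg.norm(Phi @ x)**2 / np.linalg.norm(x)**2)

print(min(ratios), max(ratios))  # concentration of ||Phi x||^2 / ||x||^2 around 1
```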
In the best case, these matrices perform nearly as well as dense Gaussian random matrices despite having many fewer nonzero entries.

**Amin Jalali (University of Washington), Recovery of Simultaneously Sparse Signals from Random Linear Measurements** (joint work with Samet Oymak, Maryam Fazel, Yonina Eldar, and Babak Hassibi)

**Michael Kallitsis (University of Michigan), Fast Algorithms for Optimal Large-scale Network Monitoring** (joint work with Stilian Stoev and George Michailidis)

Modern high-speed computer networks provide the actual infrastructure for indispensable services such as e-mail, web, and IP telephony, as well as for a host of rapidly growing new applications such as cloud computing, social networking, audio/video streaming, etc. The robustness and integrity of the network require efficient tools for traffic monitoring, analysis, and prediction which scale well with traffic volume and network size. We address the problem of optimal large-scale monitoring of computer networks under budget constraints. Specifically, we consider the task of selecting the "best" subset of at most K links to monitor, so as to optimally predict the traffic loads at the remaining links in the network. Our notion of optimality is quantified in terms of the statistical error of network kriging predictors in the context of global network traffic models. The optimal monitoring problem is akin to certain optimal statistical design problems. In the networking context, however, there are new combinatorial constraints which render algorithms seeking the exact solution impractical. We develop a number of fast algorithms that improve upon existing greedy algorithms in terms of computational complexity and accuracy. Our algorithms exploit the geometry of principal component analysis, which also leads us to new types of theoretical bounds on the prediction error.
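To picture the kriging predictor itself (a minimal sketch, not the authors' selection algorithms; the covariance and the monitored subset below are synthetic): for a zero-mean Gaussian load vector with known covariance, the best linear predictor of the unobserved link loads is the conditional mean, and its error covariance is what a monitor-selection algorithm would try to minimize over K-subsets:

```python
import numpy as np

rng = np.random.default_rng(1)
L, K = 10, 4                       # toy network: 10 links, budget of 4 monitors

# Synthetic covariance of link loads (in practice this comes from a
# global traffic model); A A^T + eps*I keeps it well conditioned.
A = rng.standard_normal((L, L))
Sigma = A @ A.T + 0.1 * np.eye(L)

obs = np.array([0, 2, 5, 7])                 # an arbitrary K-subset to monitor
hid = np.setdiff1d(np.arange(L), obs)        # links whose load is predicted

# Network kriging: E[x_hid | x_obs] = W x_obs with the weights below;
# err_cov is the Schur complement, i.e. the prediction error covariance.
W = Sigma[np.ix_(hid, obs)] @ np.linalg.inv(Sigma[np.ix_(obs, obs)])
err_cov = Sigma[np.ix_(hid, hid)] - W @ Sigma[np.ix_(obs, hid)]
print("total prediction error (trace):", np.trace(err_cov))
```

Ranking all K-subsets by this trace is exactly the combinatorial design problem the abstract refers to; the fast algorithms in the poster avoid that exhaustive search.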
Finally, these algorithms are amenable to randomization, where the best of several parallel independent instances often yields the exact optimal solution in practice. Their performance is illustrated and evaluated on simulated and real-network traces.

**Reut Levi (Tel Aviv University), Testing Collections of Distributions for Similar Means**

We consider the problem of testing a basic property of collections of distributions: having similar means. Namely, the algorithm should accept collections of distributions in which all distributions have means that do not differ by more than some given parameter, and should reject collections that are relatively far from having this property. By "far" we mean that it is necessary to modify the distributions in a relatively significant manner (measured according to the $\ell_1$ distance averaged over the distributions) so as to obtain the property. We study this problem in two models. In the first model (the query model) the algorithm may ask for samples from any distribution of its choice, and in the second model (the sampling model) the distributions from which it gets samples are selected randomly. We provide upper and lower bounds in both models. In particular, in the query model the complexity of the problem is polynomial in $1/\epsilon$ (where $\epsilon$ is the given distance parameter), while in the sampling model the complexity grows roughly as $m^{1-\epsilon^2}$, where $m$ is the number of distributions.

**Irene Manotas (Industrial University of Santander), Compressive Sensing Image Recovery Algorithms on a Smartphone Platform**

Compressive Sensing (CS) is a remarkable framework that efficiently senses a signal using a set of random projections of the underlying signal. The random projections are then used by a reconstruction algorithm that recovers the initial signal. CS has been recognized for its diverse applications in a broad range of fields such as imaging, optics, and communications.
Extensive efforts have been made to determine the minimum number of required random projections, as well as to propose efficient optimization algorithms for correct, high-quality reconstruction. In practice, a huge number of matrix-vector operations is required to implement these reconstruction algorithms, which directly impacts their complexity and energy consumption. For that reason, CS techniques have mostly been implemented on high-performance computational architectures, such as personal computers, servers, or GPUs, and little research has been done on the analysis and implementation of CS techniques on mobile handset devices such as cellphones, whose computational resources are limited. This work implements two CS recovery algorithms on a mobile handset device; the computation time and energy consumption required to run CS techniques on a device with restricted memory and computational capacity are reported. The results show the appropriate CS configuration and recovery algorithm for mobile handsets according to two criteria: time and energy consumption.

**Alejandro Weinstein (Colorado School of Mines), Joint Denoising of a Signal Ensemble: The Power of Veto**

This work presents a technique to denoise a signal ensemble by exploiting sparsity at both the inter- and intra-signal level. The problem of signal denoising using thresholding estimators has received a lot of attention in the literature, starting in the 1990s when Donoho and Johnstone introduced the concept of wavelet shrinkage. In this approach, the signal is represented in a basis where it is sparse, and each noisy coefficient is thresholded by a parameter that depends on the noise level. We extend this concept to a set of signals, under the assumption that the signals have a common support. Our approach is based on a vetoing mechanism, where in addition to thresholding, the inter-signal information is used to "save" a coefficient that would otherwise be "killed".
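One way to picture this veto mechanism (a simplified sketch, not the exact estimator or risk analysis from the poster; the threshold and vote count are arbitrary choices): hard-threshold each signal's coefficients, but save a coefficient whenever enough of the other signals are large at the same index, reflecting the common-support assumption:

```python
import numpy as np

def veto_denoise(coeffs, tau, min_votes):
    """Hard-threshold each row of `coeffs` (one signal per row), but
    veto the kill at index k whenever at least `min_votes` signals
    exceed the threshold there (common-support assumption)."""
    coeffs = np.asarray(coeffs, dtype=float)
    big = np.abs(coeffs) > tau           # per-signal survivors
    votes = big.sum(axis=0)              # inter-signal evidence per index
    keep = big | (votes >= min_votes)    # veto "saves" weak coefficients
    return np.where(keep, coeffs, 0.0)

# Toy ensemble: three noisy signals sharing support {1, 3}.
rng = np.random.default_rng(2)
clean = np.zeros((3, 6))
clean[:, 1] = 5.0
clean[:, 3] = np.array([4.0, 0.9, 4.0])  # coefficient is weak in signal 1
noisy = clean + 0.3 * rng.standard_normal((3, 6))
out = veto_denoise(noisy, tau=1.5, min_votes=2)
# Independent thresholding would kill the weak coefficient noisy[1, 3];
# the two large coefficients at index 3 in the other signals veto that.
```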
Our method achieves a smaller risk than independent denoising, and we quantify the expected value of this improvement. The results show a consistent improvement over independent denoising, achieving results close to those produced by an oracle. We validate the technique using both synthetic and real-world signals.

**Erez Weisbard (Bar Ilan University), Efficient Signatures for Network Coding**

Network coding helps maximize network throughput. However, such schemes are also vulnerable to pollution attacks, in which malicious forwarders inject polluted messages into the system. Traditional cryptographic solutions such as digital signatures are not suited to network coding, in which nodes do not forward the original packets but rather linear combinations of the data they receive. We describe a secure scheme that uses batch techniques and selective verification to efficiently verify the integrity of the received packets. For real peer-to-peer networks, our scheme is much more efficient than previously suggested schemes.
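The core difficulty, verifying linear combinations rather than original packets, is commonly handled with homomorphic hashing. The toy sketch below (not the scheme from this poster, which adds batching and selective verification, and with deliberately tiny, insecure parameters) shows the idea: with $h(x) = \prod_i g_i^{x_i} \bmod p$, a forwarder's combination satisfies $h(ax + by) = h(x)^a \cdot h(y)^b$, so a node holding only the hashes of the originals can check a received packet:

```python
# Toy homomorphic hash for network coding verification.
p = 1000003          # public prime modulus (illustrative, NOT secure)
g = [2, 3, 5, 7]     # public bases, one per packet symbol

def h(packet):
    """Hash a packet (list of integer symbols) multiplicatively."""
    out = 1
    for gi, xi in zip(g, packet):
        out = (out * pow(gi, xi, p)) % p
    return out

x = [10, 20, 30, 40]                 # original packets at the source
y = [1, 2, 3, 4]
a, b = 6, 11                         # a forwarder's coding coefficients
combo = [a * xi + b * yi for xi, yi in zip(x, y)]

# A downstream node holding only h(x) and h(y) verifies the combined
# packet without ever seeing x or y:
assert h(combo) == (pow(h(x), a, p) * pow(h(y), b, p)) % p
```

A polluted packet would fail this check; the expense of the modular exponentiations is precisely what batching and selective verification aim to reduce.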