A new theory known as Compressed Sensing (CS) has recently emerged that can also be categorized as a type of dimensionality reduction. Like manifold learning, CS is strongly model-based (relying on sparsity in particular). However, unlike many of the standard techniques in dimensionality reduction (such as manifold learning or the JL lemma), the goal of CS is to maintain a low-dimensional representation of a signal $x$ from which a faithful approximation to $x$ can be recovered. In a sense, this more closely resembles the traditional problem of data compression (see Compression). In CS, however, the encoder requires no a priori knowledge of the signal structure; only the decoder uses the model (sparsity) to recover the signal. We justify such an approach again using geometric arguments.
Consider a signal $x \in \mathbb{R}^N$, and suppose that the basis $\Psi$ provides a $K$-sparse representation of $x$: $x = \Psi \alpha$, where the coefficient vector $\alpha$ has only $K$ nonzero entries.
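As a concrete illustration, a $K$-sparse signal can be synthesized as follows (a minimal NumPy sketch; the QR-derived basis and the values $N = 64$, $K = 4$ are illustrative assumptions, not from the text — in practice $\Psi$ might be a wavelet or DCT basis):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 64, 4  # ambient dimension and sparsity level (illustrative values)

# An orthonormal basis Psi, built here via QR factorization of a random matrix.
Psi, _ = np.linalg.qr(rng.standard_normal((N, N)))

# A K-sparse coefficient vector: exactly K nonzero entries.
alpha = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
alpha[support] = rng.standard_normal(K)

x = Psi @ alpha  # x has a K-sparse representation in the basis Psi
```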
As we discussed in Compression, the standard procedure for compressing sparse signals, known as transform coding, is to (i) acquire the full $N$-sample signal $x$; (ii) compute the complete set of transform coefficients $\{\alpha_i\}$; (iii) locate the $K$ largest, significant coefficients and discard the (many) small coefficients; (iv) encode the values and locations of the $K$ largest coefficients.
This procedure has three inherent inefficiencies. First, for a high-dimensional signal, we must start with a large number of samples $N$. Second, the encoder must compute all $N$ of the transform coefficients $\{\alpha_i\}$, even though it will discard all but $K$ of them. Third, the encoder must encode the locations of the large coefficients, which requires increasing the coding rate since the locations change with each signal.
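The transform-coding steps above can be sketched as follows (NumPy; the basis construction and the dimensions are illustrative assumptions). Because this test signal is exactly $K$-sparse, keeping the $K$ largest coefficients reproduces it perfectly, yet the encoder still had to compute all $N$ coefficients only to discard $N - K$ of them:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 64, 4  # illustrative dimensions

# Synthesize an exactly K-sparse test signal in an orthonormal basis Psi.
Psi, _ = np.linalg.qr(rng.standard_normal((N, N)))
alpha_true = np.zeros(N)
alpha_true[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
x = Psi @ alpha_true

# (i)-(ii): acquire all N samples and compute all N transform coefficients.
alpha = Psi.T @ x

# (iii): keep the K largest-magnitude coefficients, discard the rest.
keep = np.argsort(np.abs(alpha))[-K:]
alpha_hat = np.zeros(N)
alpha_hat[keep] = alpha[keep]

# (iv): in practice, the K values in alpha_hat[keep] and the K indices in
# `keep` would both be encoded. Decoding then reconstructs:
x_hat = Psi @ alpha_hat
```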
This raises a simple question: For a given signal, is it possible to directly estimate the set of large $\alpha_i$'s that will not be discarded? While this seems improbable, Candès, Romberg, and Tao [link], [link] and Donoho [link] have shown that a reduced set of projections can contain enough information to reconstruct sparse signals. An offshoot of this work, often referred to as Compressed Sensing (CS) [link], [link], [link], [link], [link], [link], [link], has emerged that builds on this principle.
In CS, we do not measure or encode the significant $\alpha_i$ directly. Rather, we measure and encode $M < N$ projections $y_m = \langle x, \phi_m \rangle$ of the signal onto a second set of functions $\{\phi_m\}$, $m = 1, \dots, M$. In matrix notation, we measure $y = \Phi x$, where $y$ is an $M \times 1$ vector and the measurement matrix $\Phi$ is $M \times N$ with rows $\phi_m$.
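This measurement step, together with one possible sparsity-exploiting decoder, can be sketched as follows (NumPy; the random Gaussian $\Phi$, the dimensions, and the use of orthogonal matching pursuit as the decoder are illustrative assumptions — CS recovery results are also commonly stated for $\ell_1$-minimization decoders). Note that the encoder only computes $y = \Phi x$ and never looks at the sparsity structure:

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, K = 64, 32, 3  # signal length, number of measurements, sparsity

# An exactly K-sparse signal in an orthonormal basis Psi (illustrative).
Psi, _ = np.linalg.qr(rng.standard_normal((N, N)))
alpha = np.zeros(N)
alpha[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
x = Psi @ alpha

# Encoder: M < N random projections. Phi is fixed in advance, with no
# knowledge of where the signal's large coefficients lie.
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x  # the measured vector, y = Phi x

# Decoder: orthogonal matching pursuit on the effective matrix A = Phi Psi,
# greedily building the support of alpha from the measurements alone.
A = Phi @ Psi
residual, support = y.copy(), []
for _ in range(K):
    j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
    support.append(j)
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef          # project out chosen columns

alpha_hat = np.zeros(N)
alpha_hat[support] = coef
x_hat = Psi @ alpha_hat  # recovered signal
```

With these dimensions the greedy decoder recovers the sparse signal from only $M = 32$ of the $N = 64$ samples' worth of information; the model (sparsity) is used only at the decoder, exactly as the text describes.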