Let $f = \sum_m a[m]\, g_m$ be the representation of $f$ in an orthonormal basis $\mathcal{B} = \{g_m\}_{0 \le m < N}$. An approximation must be recovered from

$$Y = \sum_m a[m]\, U g_m + W.$$
A basis $\mathcal{B}$ of singular vectors diagonalizes $U^{*}U$. Then $U$ transforms a subset of $Q$ vectors $\{g_m\}_{m \in I_Q}$ of $\mathcal{B}$ into an orthogonal basis $\{U g_m\}_{m \in I_Q}$ of $\mathrm{Im}\,U$ and sets all other vectors to zero. A singular value decomposition estimates the coefficients $a[m]$ of $f$ by projecting $Y$ on this singular basis and by renormalizing the resulting coefficients:

$$\forall\, 0 \le m < N, \quad \tilde{a}[m] = \frac{\langle Y, U g_m \rangle}{\|U g_m\|^2 + h_m^2},$$

where the $h_m^2$ are regularization parameters.
Such estimators recover nonzero coefficients in a space of dimension $Q$ and thus bring no super-resolution. If $U$ is a convolution operator, then $\mathcal{B}$ is the Fourier basis and a singular value estimation implements a regularized inverse convolution.
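As a numerical illustration, the following sketch (not from the book; it assumes a circular Gaussian blur and a constant regularization parameter $h_m^2$) implements this regularized singular value estimator with the FFT, since the Fourier basis diagonalizes convolution operators.

```python
import numpy as np

# Sketch: regularized inverse convolution via the Fourier (singular) basis.
# For a circular convolution Y = h * f + W, the DFT diagonalizes U, so the
# singular values are the Fourier coefficients of the filter h.
rng = np.random.default_rng(0)
N = 256
f = np.zeros(N)
f[[40, 90, 91, 180]] = [1.0, -0.7, 0.9, 0.5]                 # test signal
h = np.exp(-0.5 * (np.arange(N) - N // 2) ** 2 / 4.0)
h /= h.sum()                                                 # blur kernel
Y = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(f)))     # Uf
Y += 0.01 * rng.standard_normal(N)                           # Y = Uf + W

H = np.fft.fft(h)            # singular values of U in the Fourier basis
h2 = 1e-2                    # regularization parameters h_m^2 (constant here)
a_tilde = np.conj(H) * np.fft.fft(Y) / (np.abs(H) ** 2 + h2)  # <Y, Ug_m> / (||Ug_m||^2 + h_m^2)
f_tilde = np.real(np.fft.ifft(a_tilde))
print("relative error:", np.linalg.norm(f_tilde - f) / np.linalg.norm(f))
```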
The basis that diagonalizes $U^{*}U$ rarely provides a sparse signal representation. For example, a Fourier basis that diagonalizes convolution operators does not efficiently approximate signals including singularities.
Donoho (Donoho:95) introduced more flexibility by looking for a basis $\mathcal{B}$ providing a sparse signal representation, where a subset of $Q$ vectors $\{g_m\}_{m \in I_Q}$ are transformed by $U$ in a Riesz basis $\{U g_m\}_{m \in I_Q}$ of $\mathrm{Im}\,U$, while the others are set to zero. With an appropriate renormalization, $\{\lambda_m^{-1} U g_m\}_{m \in I_Q}$ has a biorthogonal basis $\{\tilde{\phi}_m\}_{m \in I_Q}$ that is normalized: $\|\tilde{\phi}_m\| = 1$. The sparse coefficients of $f$ in $\mathcal{B}$ can then be estimated with a thresholding

$$\forall\, m \in I_Q, \quad \tilde{a}[m] = \rho_{T_m}\!\big(\lambda_m^{-1} \langle Y, \tilde{\phi}_m \rangle\big), \quad \text{with } \rho_T(x) = x\, \mathbf{1}_{|x| > T},$$

for thresholds $T_m$ appropriately defined.
For classes of signals that are sparse in $\mathcal{B}$, such thresholding estimators may yield a nearly minimax risk, but they provide no super-resolution since this nonlinear projector remains in a space of dimension $Q$. This result applies to classes of convolution operators $U$ in wavelet or wavelet packet bases. Diagonal inverse estimators are computationally efficient and potentially optimal in cases where super-resolution is not possible.
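The following toy sketch illustrates the thresholding estimator in the simplest possible setting, assuming that $U$ acts diagonally in the basis $\mathcal{B}$ itself, so the renormalized family is orthonormal and coincides with its biorthogonal basis; the threshold choice $T_m = 3\sigma/\lambda_m$ is an arbitrary illustration, not the book's prescription.

```python
import numpy as np

# Toy sketch of the thresholding estimator: U is diagonal in B (here the
# canonical basis) with gains lambda_m.
rng = np.random.default_rng(1)
N, sigma = 512, 0.05
a = np.zeros(N)
idx = rng.choice(N, 12, replace=False)
a[idx] = 2 * rng.standard_normal(12)             # sparse coefficients of f in B
lam = 0.2 + 0.8 * rng.random(N)                  # singular values lambda_m of U
Y = lam * a + sigma * rng.standard_normal(N)     # Y = Uf + W in this basis

# a_tilde[m] = rho_T(lambda_m^{-1} <Y, phi_m>), hard thresholding
# rho_T(x) = x 1_{|x| > T}. The noise on lambda_m^{-1} Y[m] has standard
# deviation sigma / lambda_m, so the threshold T_m is scaled accordingly.
T = 3 * sigma / lam
x = Y / lam
a_tilde = np.where(np.abs(x) > T, x, 0.0)
print("relative error:", np.linalg.norm(a_tilde - a) / np.linalg.norm(a))
```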
Suppose that $f$ has a sparse representation in some dictionary $\mathcal{D} = \{\phi_p\}_{p \in \Gamma}$ of $P$ normalized vectors. The $P$ vectors of the transformed dictionary $\mathcal{D}_U = \{U\phi_p\}_{p \in \Gamma}$ belong to the space $\mathrm{Im}\,U$ of dimension $Q < N$ and thus define a redundant dictionary. Vectors in the approximation support $\Lambda$ of $f$ are not restricted a priori to a particular subspace of $\mathbb{C}^N$. Super-resolution is possible if the approximation support $\Lambda$ of $f$ in $\mathcal{D}$ can be estimated by decomposing the noisy data $Y$ over $\mathcal{D}_U$. This depends on the properties of the approximation support $\Lambda$ of $f$ in $\Gamma$.
Let $w_\Lambda = f - f_\Lambda$ be the approximation error of a sparse representation $f_\Lambda = \sum_{p \in \Lambda} a[p]\, \phi_p$ of $f$. The observed signal can be written as

$$Y = Uf + W = \sum_{p \in \Lambda} a[p]\, U\phi_p + U w_\Lambda + W.$$

If the support $\Lambda$ can be identified by finding a sparse approximation $Y_\Lambda = \sum_{p \in \Lambda} \tilde{a}[p]\, U\phi_p$ of $Y$ in $\mathcal{D}_U$, then we can recover a super-resolution estimation of $f$:

$$\tilde{F} = \sum_{p \in \Lambda} \tilde{a}[p]\, \phi_p.$$
This shows that super-resolution is possible if the approximation support $\Lambda$ can be identified by decomposing $Y$ in the redundant transformed dictionary $\mathcal{D}_U$. If the exact recovery criterion is satisfied and if $\{U\phi_p\}_{p \in \Lambda}$ is a Riesz basis, then $\Lambda$ can be recovered using pursuit algorithms with controlled error bounds.
For most operators $U$, not all sparse approximation sets $\Lambda$ can be recovered. It is necessary to impose some further geometric conditions on $\Lambda$ in $\Gamma$, which makes super-resolution difficult and often unstable. Numerical applications to sparse spike deconvolution, tomography, super-resolution zooming, and inpainting illustrate these results.
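A minimal sketch of sparse spike deconvolution follows, assuming a Gaussian blur $U$ and a Dirac dictionary $\mathcal{D}$ (an illustrative setup, not the book's code); it runs orthogonal matching pursuit, one of the pursuit algorithms mentioned above, over the transformed dictionary $\mathcal{D}_U$.

```python
import numpy as np

# Sketch: super-resolution spike deconvolution by orthogonal matching pursuit
# (OMP). D is the Dirac basis, so the columns of the blur matrix form the
# transformed dictionary D_U.
rng = np.random.default_rng(2)
N, K, sigma = 128, 5, 0.01
t = np.arange(N)
U = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / 9.0)   # Gaussian blur operator
U /= np.linalg.norm(U, axis=0)                            # normalized dictionary D_U
supp = rng.choice(N, K, replace=False)
a = np.zeros(N)
a[supp] = (1 + rng.random(K)) * rng.choice([-1, 1], K)    # sparse spikes
Y = U @ a + sigma * rng.standard_normal(N)

Lam, r = [], Y.copy()
for _ in range(K):
    Lam.append(int(np.argmax(np.abs(U.T @ r))))           # most correlated atom
    coef, *_ = np.linalg.lstsq(U[:, Lam], Y, rcond=None)  # refit on current support
    r = Y - U[:, Lam] @ coef                              # update residual
F = np.zeros(N)
F[Lam] = coef                 # super-resolved estimate in the Dirac dictionary
print("true support:     ", sorted(supp))
print("estimated support:", sorted(Lam))
# Recovery degrades when spikes are too close relative to the blur width,
# reflecting the geometric conditions on Lambda discussed above.
```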
Candès and Tao (candes-near-optimal), and Donoho (donoho-cs) proved that stable super-resolution is possible for any sufficiently sparse signal $f$ if $U$ is an operator with random coefficients. Compressive sensing then becomes possible by recovering a close approximation of $f \in \mathbb{C}^N$ from $Q \ll N$ linear measurements (candes-cs-review).
A recovery is stable for a sparse approximation set $|\Lambda| \le M$ only if the corresponding dictionary family $\{U\phi_p\}_{p \in \Lambda}$ is a Riesz basis of the space it generates. The $M$-restricted isometry condition of Candès, Tao, and Donoho (donoho-cs) imposes uniform Riesz bounds for all sets $\Lambda \subset \Gamma$ with $|\Lambda| \le M$:

$$\forall\, \Lambda \subset \Gamma \ \text{with}\ |\Lambda| \le M, \ \forall\, a \in \mathbb{C}^{|\Lambda|}, \qquad (1 - \delta_M)\, \|a\|^2 \le \Big\|\sum_{p \in \Lambda} a[p]\, U\phi_p\Big\|^2 \le (1 + \delta_M)\, \|a\|^2.$$
This is a strong incoherence condition on the $P$ vectors of $\mathcal{D}_U$, which supposes that any subset of fewer than $M$ vectors is nearly uniformly distributed on the unit sphere of $\mathrm{Im}\,U$.
For an orthogonal basis $\mathcal{D} = \mathcal{B}$, this is possible for $M \le C\,Q\,(\log N)^{-1}$ if $U$ is a matrix with independent Gaussian random coefficients. A pursuit algorithm then provides a stable approximation of any $f \in \mathbb{C}^N$ having a sparse approximation from $M$ vectors in $\mathcal{B}$.
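The uniform Riesz bounds cannot be checked exhaustively, but the following sketch (a Monte Carlo illustration, not a proof) lower-bounds the restricted isometry constant $\delta_M$ of a Gaussian random matrix by sampling random supports $\Lambda$ and measuring the extreme singular values of the corresponding column subsets.

```python
import numpy as np

# Monte Carlo sketch: empirical lower bound on the restricted isometry
# constant delta_M of a normalized Gaussian random matrix.
rng = np.random.default_rng(4)
Q, N, M, trials = 128, 512, 10, 2000
U = rng.standard_normal((Q, N)) / np.sqrt(Q)   # columns have unit expected norm

delta = 0.0
for _ in range(trials):
    Lam = rng.choice(N, M, replace=False)                    # random support
    s = np.linalg.svd(U[:, Lam], compute_uv=False)
    delta = max(delta, abs(s[0] ** 2 - 1), abs(s[-1] ** 2 - 1))  # Riesz-bound deviation
print("empirical lower bound on delta_M:", round(delta, 3))
```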
These results open a new compressive-sensing approach to signal acquisition and representation. Instead of first discretizing linearly the signal at a high resolution $N$ and then computing a nonlinear representation over $M$ coefficients in some dictionary, compressive sensing measures directly $M$ randomized linear coefficients. A reconstructed signal is then recovered by a nonlinear algorithm, producing an error that can be of the same order of magnitude as the error obtained by the more classic two-step approximation process, with a more economical acquisition process. These results remain valid for several types of random matrices $U$. Examples of applications to single-pixel cameras, video super-resolution, new analog-to-digital converters, and MRI imaging are described.
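A minimal compressive-sensing sketch, assuming a Gaussian measurement matrix and noiseless acquisition, recovers an $M$-sparse signal from $Q \ll N$ random measurements; the nonlinear reconstruction used here is iterative soft thresholding (ISTA) for the $\ell^1$-penalized inverse problem, one possible choice among recovery algorithms, with an arbitrary penalty weight.

```python
import numpy as np

# Sketch of compressive sensing: acquire Q random linear measurements of an
# M-sparse signal and reconstruct it with ISTA, a simple proximal solver for
# the l1-penalized inverse problem.
rng = np.random.default_rng(3)
Q, N, M = 80, 256, 8
f = np.zeros(N)
f[rng.choice(N, M, replace=False)] = rng.standard_normal(M)
U = rng.standard_normal((Q, N)) / np.sqrt(Q)   # random Gaussian measurement matrix
Y = U @ f                                       # Q randomized linear measurements

mu = 1e-3                                       # l1 penalty weight (assumption)
L = np.linalg.norm(U, 2) ** 2                   # Lipschitz constant of the gradient
x = np.zeros(N)
for _ in range(2000):
    g = x + (U.T @ (Y - U @ x)) / L             # gradient step on 0.5*||Y - Ux||^2
    x = np.sign(g) * np.maximum(np.abs(g) - mu / L, 0.0)   # soft thresholding
print("relative error:", np.linalg.norm(x - f) / np.linalg.norm(f))
```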
Sparsity in redundant dictionaries also provides efficient strategies to separate a family of signals $\{f_s\}_{0 \le s < S}$ that are linearly mixed in $K \le S$ observed signals with noise:

$$Y_k[n] = \sum_{s=0}^{S-1} u_{k,s}\, f_s[n] + W_k[n], \quad \text{for } 0 \le n < N \ \text{and}\ 0 \le k < K.$$

From a stereo recording, separating the sounds of $S = 3$ musical instruments is an example of source separation with $K = 2$. Most often the mixing matrix $U = \{u_{k,s}\}_{0 \le k < K,\, 0 \le s < S}$ is unknown. Source separation is a super-resolution problem since $S\,N$ data values must be recovered from the $Q = K\,N < S\,N$ available measurements. Not knowing the operator $U$ makes it even more complicated.
If each source $f_s$ has a sparse approximation support $\Lambda_s$ in a dictionary $\mathcal{D}$, with $\sum_s |\Lambda_s| \ll N$, then it is likely that the sets $\{\Lambda_s\}_{0 \le s < S}$ are nearly disjoint. In this case, the operator $U$, the supports $\Lambda_s$, and the sources $f_s$ are approximated by computing sparse approximations of the observed data $Y_k$ in $\mathcal{D}$. The distribution of these coefficients identifies the coefficients of the mixing matrix $U$ and the nearly disjoint source supports. Time-frequency separation of sounds illustrates these results.
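The sketch below illustrates the idea under strong simplifying assumptions (synthetic sinusoidal sources with disjoint Fourier supports, a known number of sources, and a crude one-dimensional k-means): each significant coefficient vector of the mixtures is a scalar multiple of one column of the mixing matrix, so the columns of $U$ appear as clusters of coefficient directions.

```python
import numpy as np

# Sketch: blind identification of a 2 x 3 mixing matrix from K = 2 mixtures
# of S = 3 sources that are sparse, with disjoint supports, in the Fourier
# dictionary.
rng = np.random.default_rng(5)
N, S, K = 1024, 3, 2
t = np.arange(N) / N
sources = np.stack([np.sin(2 * np.pi * f0 * t) for f0 in (20, 55, 130)])
U = np.array([[1.0, 0.7, 0.2],
              [0.3, 0.8, 1.0]])                 # unknown mixing matrix
Y = U @ sources + 0.01 * rng.standard_normal((K, N))

C = np.fft.rfft(Y, axis=1)                      # mixture coefficients in the dictionary
mask = np.abs(C).max(axis=0) > 0.05 * np.abs(C).max()     # significant coefficients
ang = np.arctan2(np.abs(C[1, mask]), np.abs(C[0, mask]))  # direction of each coefficient

cent = np.quantile(ang, [0.1, 0.5, 0.9])        # crude 1-D k-means over directions
for _ in range(20):
    lab = np.argmin(np.abs(ang[:, None] - cent[None, :]), axis=1)
    cent = np.array([ang[lab == s].mean() for s in range(S)])
print("estimated mixing directions:", np.round(np.sort(cent), 2))
print("true mixing directions:     ", np.round(np.sort(np.arctan2(U[1], U[0])), 2))
# Assigning each significant coefficient to its nearest direction and
# inverting the transform per cluster then yields the separated sources.
```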