In Nonlinear Approximation from Approximation , we measured the quality of a dictionary in terms of its few-term (sparse) approximations to signals drawn from some class. One reason that such approximations are desirable is that they provide concise descriptions of the signal that can be easily stored, processed, etc. There is even speculation and evidence that neurons in the human visual system may use sparse coding to represent a scene [link] .
For data compression, conciseness is often exploited in a popular technique known as transform coding . Given a signal (for which a concise description may not be readily apparent in its native domain), the idea is simply to use the dictionary to transform the signal to its coefficients, which can then be efficiently and easily described. As discussed above, perhaps the simplest strategy for summarizing a sparse coefficient vector is simply to threshold, keeping the largest coefficients and discarding the rest. A simple encoder would then just encode the positions and quantized values of these coefficients.
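The threshold-and-quantize strategy above can be sketched in a few lines. The following is a toy illustration (not the encoder analyzed in this module), using an orthonormal Haar transform as the dictionary and a piecewise-constant test signal, which is sparse in that basis; the function names and the quantization step are assumptions for the sketch.

```python
import numpy as np

def haar_1d(x):
    """Forward orthonormal Haar transform; len(x) must be a power of 2."""
    x = x.astype(float).copy()
    n = len(x)
    while n > 1:
        a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2)   # scaling (average) coefficients
        d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2)   # wavelet (detail) coefficients
        x[:n // 2], x[n // 2:n] = a, d
        n //= 2
    return x

def ihaar_1d(c):
    """Inverse of haar_1d."""
    c = c.astype(float).copy()
    n = 1
    while n < len(c):
        a, d = c[:n].copy(), c[n:2 * n].copy()
        c[0:2 * n:2] = (a + d) / np.sqrt(2)
        c[1:2 * n:2] = (a - d) / np.sqrt(2)
        n *= 2
    return c

def threshold_code(coeffs, k, step=1e-3):
    """Toy transform coder: keep the k largest-magnitude coefficients and
    quantize their values to a uniform grid; all others are discarded.
    A real encoder would also entropy-code the positions and values."""
    keep = np.argsort(np.abs(coeffs))[-k:]
    coded = np.zeros_like(coeffs)
    coded[keep] = np.round(coeffs[keep] / step) * step
    return coded

# Piecewise-constant test signal with one jump: sparse in the Haar dictionary.
x = np.where(np.arange(256) < 100, 1.0, -0.5)
xhat = ihaar_1d(threshold_code(haar_1d(x), k=16))
rel_err = np.linalg.norm(x - xhat) / np.linalg.norm(x)
```

Because the signal has only a handful of significant Haar coefficients, keeping just 16 of the 256 coefficients reconstructs it almost exactly; only the quantization error remains.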
Suppose f is a function and let f̂ be an approximation to f encoded using R bits. To evaluate the quality of a coding strategy, it is common to consider the asymptotic rate-distortion (R-D) performance, which measures the decay rate of the distortion ‖f − f̂‖ as R → ∞. The metric entropy [link] for a class gives the best decay rate that can be achieved uniformly over all functions in the class. We note that this is a true measure of the complexity of a class and is tied to no particular dictionary or encoding strategy. The metric entropy also has a very geometric interpretation, as it relates to the smallest radius possible for a covering of the set by balls.
Metric entropies are known for certain signal classes. For example, the results of Clements [link] (extending those of Kolmogorov and Tihomirov [link] ) regarding metric entropy give bounds on the optimal achievable asymptotic rate-distortion performance for smooth functions of a given dimension and smoothness order (see also [link] ).
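For concreteness, for D-dimensional functions with H bounded derivatives (the symbols H, D, and R are notation assumed here, not taken from this module), the classical bound on the achievable squared-error decay takes the form

```latex
\| f - \hat{f}_R \|_{L_2}^2 \lesssim \left( \frac{1}{R} \right)^{2H/D} ,
```

so smoother classes (larger H) and lower dimensions (smaller D) admit faster distortion decay per encoded bit.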
In some cases, however, the sparsity of the wavelet transform may not reflect the true underlying structure of a signal. Examples are 2-D piecewise smooth signals with a smooth edge discontinuity separating the smooth regions. As we discussed in Nonlinear Approximation from Approximation , wavelets fail to sparsely represent these functions, and so the R-D performance for simple thresholding-based coders will suffer as well. In spite of all of the benefits of wavelet representations for signal processing (low computational complexity, tree structure, sparse approximations for smooth signals), this failure to efficiently represent edges is a significant drawback. In many images, edges carry some of the most prominent and important information [link] , and so it is desirable to have a representation well-suited to compressing edges in images.
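This inefficiency on edges is easy to observe numerically. The sketch below (an illustration of the phenomenon, not code from this module) compares the best k-term Haar approximation error of a smooth 2-D bump against that of an image containing a tilted step edge; by Parseval's theorem in an orthonormal basis, the relative squared error is simply the energy of the discarded coefficients. The separable tensor-product Haar transform and the particular test images are assumptions for the sketch.

```python
import numpy as np

def haar_1d(x):
    """One orthonormal Haar transform; len(x) must be a power of 2."""
    x = x.astype(float).copy()
    n = len(x)
    while n > 1:
        a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2)
        d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2)
        x[:n // 2], x[n // 2:n] = a, d
        n //= 2
    return x

def haar_2d(img):
    """Separable (tensor-product) 2-D Haar transform: rows, then columns."""
    out = np.apply_along_axis(haar_1d, 1, img)
    return np.apply_along_axis(haar_1d, 0, out)

def k_term_rel_error(img, k):
    """Relative squared L2 error of the best k-term Haar approximation.
    By Parseval, this equals the energy fraction of discarded coefficients."""
    energy = np.sort(haar_2d(img).ravel() ** 2)[::-1]
    return energy[k:].sum() / energy.sum()

n = 64
y, x = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
smooth = np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.1)  # smooth bump
edge = (x + 0.3 * y > 0.6).astype(float)                   # tilted step edge

err_smooth = k_term_rel_error(smooth, 64)
err_edge = k_term_rel_error(edge, 64)
```

With the same budget of 64 coefficients, the smooth image is captured far more accurately than the edge image: significant wavelet coefficients cluster along the discontinuity at every scale, so the edge consumes coefficients much faster than a smooth region does.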
To address this concern, recent work in harmonic analysis has focused on developing representations that provide sparse decompositions for certain geometric image classes. Examples include curvelets [link] , [link] and contourlets [link] , slightly redundant tight frames consisting of anisotropic, “needle-like” atoms. In [link] , bandelets are formed by warping an orthonormal wavelet basis to conform to the geometrical structure in the image. A nonlinear multiscale transform that adapts to discontinuities (and can represent a “clean” edge using very few coarse scale coefficients) is proposed in [link] . Each of these new representations has been shown to achieve near-optimal asymptotic approximation and R-D performance for piecewise smooth images consisting of smooth regions separated by discontinuities along smooth curves, with smoothness order 2 (arbitrary order for bandelets). Some have also found use in specialized compression applications such as identification photos [link] .
In [link] , we have presented a scheme that is based on the simple yet powerful observation that geometric features can be efficiently approximated using local, geometric atoms in the spatial domain, and that the projection of these geometric primitives onto wavelet subspaces can therefore approximate the corresponding wavelet coefficients. We prove that the resulting dictionary achieves the optimal nonlinear approximation rates for piecewise smooth signal classes. To account for the added complexity of this encoding strategy, we also consider R-D results and prove that this scheme comes within a logarithmic factor of the optimal performance rate. Unlike the techniques mentioned above, our method also generalizes to arbitrary orders of smoothness and arbitrary signal dimension.