The new nonlinear method is entirely different. The spectra can overlap as much as they want. The idea is to have the amplitude, rather than the location of the spectra, be as different as possible. This allows clipping, thresholding, and shrinking of the amplitude of the transform to separate signals or remove noise. It is the localizing or concentrating properties of the wavelet transform that make it particularly effective when used with these nonlinear methods. Usually the same properties that make a system good for denoising or separation by nonlinear methods make it good for compression, which is also a nonlinear process.
We develop the basic ideas of thresholding the wavelet transform using Donoho's formulations [link] , [link] , [link] . Assume a finite length signal with additive noise of the form

$$ y_i = x_i + \epsilon\, n_i, \qquad i = 1, \ldots, N, $$

as a finite length signal of observations of the signal $x_i$ that is corrupted by i.i.d. zero mean, white Gaussian noise $n_i$ with standard deviation $\epsilon$, i.e., $n_i \sim \mathcal{N}(0,1)$. The goal is to recover the signal $x$ from the noisy observations $y$. Here and in the following, $v$ denotes a vector with the ordered elements $v_i$ if the index $i$ is omitted. Let $W$ be a left invertible wavelet transformation matrix of the discrete wavelet transform (DWT). Then Eq. [link] can be written in the transformation domain

$$ Y = X + N, $$
where capital letters denote variables in the transform domain, i.e., $Y = W y$, $X = W x$, and $N = \epsilon\, W n$. Then the inverse transform matrix $W^{-1}$ exists, and we have

$$ y = W^{-1} Y. $$
The following presentation follows Donoho's approach [link] , [link] , [link] , [link] , [link] that assumes an orthogonal wavelet transform with a square $W$; i.e., $W^{-1} = W^T$. We will use the same assumption throughout this section.
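A minimal numerical sketch may help make this setup concrete. The Haar matrix construction, the particular signal $x$, and the noise level $\epsilon$ below are illustrative assumptions, not taken from the text; the sketch builds an orthonormal DWT matrix $W$, forms $y = x + \epsilon n$, and verifies the orthogonality $W^{-1} = W^T$ assumed throughout this section.

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar DWT matrix of size n x n (n a power of two)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0]) / np.sqrt(2.0)                    # scaling (average) rows
    bottom = np.kron(np.eye(n // 2), [1.0, -1.0]) / np.sqrt(2.0)   # wavelet (detail) rows
    return np.vstack([top, bottom])

rng = np.random.default_rng(0)
N, eps = 8, 0.1
x = np.array([0., 0., 1., 1., 1., 0., 0., 0.])   # hypothetical clean signal
n = rng.standard_normal(N)                       # n_i ~ N(0, 1)
y = x + eps * n                                  # noisy observations

W = haar_matrix(N)
assert np.allclose(W @ W.T, np.eye(N))           # orthogonality: W^{-1} = W^T

Y, X = W @ y, W @ x                              # transform-domain variables
assert np.allclose(Y, X + eps * (W @ n))         # Y = X + N with N = eps * W n
```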
Let $\hat{X}$ denote an estimate of $X$, based on the observations $Y$. We consider diagonal linear projections

$$ \Delta = \mathrm{diag}(\delta_1, \ldots, \delta_N), \qquad \delta_i \in \{0, 1\}, \quad i = 1, \ldots, N, $$
which give rise to the estimate

$$ \hat{x} = W^{-1} \hat{X} = W^{-1} \Delta Y = W^{-1} \Delta W y. $$
The estimate $\hat{X}$ is obtained by simply keeping or zeroing the individual wavelet coefficients. Since we are interested in the $\ell_2$ error we define the risk measure

$$ \mathcal{R}(\hat{X}, X) = E\!\left[ \| \hat{x} - x \|_2^2 \right] = E\!\left[ \| W^{-1} (\hat{X} - X) \|_2^2 \right] = E\!\left[ \| \hat{X} - X \|_2^2 \right]. $$
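As a quick check of the last equality (the random transform, the signal, and the keep/zero pattern below are only illustrative stand-ins), the following sketch shows that, because $W$ is orthogonal, the squared error measured on the samples equals the squared error measured on the wavelet coefficients, so the risk can be evaluated entirely in the transform domain.

```python
import numpy as np

rng = np.random.default_rng(1)
N, eps = 16, 0.2

# Any orthogonal transform works for this check; a random one stands in for W.
W, _ = np.linalg.qr(rng.standard_normal((N, N)))

x = np.sin(2 * np.pi * np.arange(N) / N)       # illustrative clean signal
y = x + eps * rng.standard_normal(N)           # noisy observations
X, Y = W @ x, W @ y

delta = (np.abs(Y) > eps).astype(float)        # some keep/zero pattern (illustrative)
X_hat = delta * Y                              # diagonal projection in the transform domain
x_hat = W.T @ X_hat                            # W^{-1} = W^T for orthogonal W

# Orthogonality makes the two error measures identical (up to round-off).
err_sample = np.sum((x_hat - x) ** 2)
err_coeff = np.sum((X_hat - X) ** 2)
assert np.isclose(err_sample, err_coeff)
```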
Notice that the last equality in Eq. [link] is a consequence of the orthogonality of $W$. The optimal coefficients in the diagonal projection scheme are $\delta_i = 1_{\{|X_i| > \epsilon\}}$; i.e., only those values of $Y_i$ where the corresponding elements of $X$ are larger than $\epsilon$ are kept, all others are set to zero. (It is interesting to note that allowing arbitrary $\delta_i \in \mathbb{R}$ improves the ideal risk by at most a factor of 2 [link].) This leads to the ideal risk

$$ \mathcal{R}_{\mathrm{id}}(X, \epsilon) = \sum_{i=1}^{N} \min\!\left( X_i^2,\, \epsilon^2 \right). $$
The ideal risk cannot be attained in practice, since it requires knowledge of $X$, the wavelet transform of the unknown vector $x$. However, it does give us a lower limit for the $\ell_2$ error.
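To make the ideal risk concrete, the sketch below (the random orthogonal transform and all names are illustrative assumptions) applies the oracle rule $\delta_i = 1$ when $|X_i| > \epsilon$, which indeed requires the unknown $X$, and compares a Monte Carlo estimate of its average error with $\sum_i \min(X_i^2, \epsilon^2)$; the two agree up to simulation error.

```python
import numpy as np

rng = np.random.default_rng(2)
N, eps, trials = 16, 0.2, 2000

W, _ = np.linalg.qr(rng.standard_normal((N, N)))   # stand-in orthogonal transform
x = np.sin(2 * np.pi * np.arange(N) / N)           # illustrative clean signal
X = W @ x
delta = (np.abs(X) > eps).astype(float)            # oracle pattern: needs the unknown X

# Ideal risk: sum_i min(X_i^2, eps^2)
risk_ideal = np.sum(np.minimum(X ** 2, eps ** 2))

# Monte Carlo estimate of E||x_hat - x||^2 for the oracle diagonal projection.
errors = []
for _ in range(trials):
    Y = X + eps * (W @ rng.standard_normal(N))
    x_hat = W.T @ (delta * Y)
    errors.append(np.sum((x_hat - x) ** 2))

print(f"ideal risk        : {risk_ideal:.4f}")
print(f"oracle projection : {np.mean(errors):.4f}")
```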
Donoho proposes the following scheme for denoising:

1. compute the DWT $Y = W y$;
2. perform thresholding in the wavelet domain, either by hard thresholding,
$$ \hat{X}_i = T_h(Y_i, t) = \begin{cases} Y_i, & |Y_i| \ge t \\ 0, & |Y_i| < t, \end{cases} $$
or by soft thresholding,
$$ \hat{X}_i = T_s(Y_i, t) = \operatorname{sgn}(Y_i)\, \max\!\left( 0,\, |Y_i| - t \right); $$
3. compute the inverse DWT $\hat{x} = W^{-1} \hat{X}$.
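The three steps can be sketched with the PyWavelets package as below. The wavelet ('db4'), the decomposition level, the choice to leave the coarse approximation coefficients unthresholded, and the threshold $t = \epsilon \sqrt{2 \log N}$ are assumptions made here for illustration and are not prescribed by the text.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(3)
N, eps = 1024, 0.1
t_axis = np.linspace(0, 1, N)
x = np.sin(2 * np.pi * 5 * t_axis) * (t_axis > 0.3)   # illustrative piecewise-smooth signal
y = x + eps * rng.standard_normal(N)                  # noisy observations

wavelet, level = "db4", 5                             # assumed choices, not from the text
thresh = eps * np.sqrt(2 * np.log(N))                 # a common threshold choice (assumed)

# Step 1: forward DWT.
coeffs = pywt.wavedec(y, wavelet, level=level)
# Step 2: soft-threshold the detail coefficients (coarse approximation kept as is).
coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
# Step 3: inverse DWT.
x_hat = pywt.waverec(coeffs, wavelet)[:N]

print(f"noisy error   : {np.sum((y - x) ** 2):.3f}")
print(f"denoised error: {np.sum((x_hat - x) ** 2):.3f}")
```

Leaving the coarse approximation coefficients unthresholded is a common practical choice; thresholding them as well is equally consistent with the scheme.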
This simple scheme has several interesting properties. Its risk is within a logarithmic factor ($\log N$) of the ideal risk for both thresholding schemes and properly chosen thresholds $t(N, \epsilon)$. If one employs soft thresholding, then the estimate is with high probability at least as smooth as the original function. The proof of this proposition relies on the fact that wavelets are unconditional bases for a variety of smoothness classes and that soft thresholding guarantees (with high probability) that the shrinkage condition $|\hat{X}_i| \le |X_i|$ holds. The shrinkage condition guarantees that $\hat{x}$ is in the same smoothness class as is $x$. Moreover, the soft threshold estimate is the optimal estimate that satisfies the shrinkage condition. The smoothness property guarantees an estimate free from spurious oscillations which may result from hard thresholding or Fourier methods. Also, it can be shown that it is not possible to come closer to the ideal risk than within a factor $\log N$. Not only does Donoho's method have nice theoretical properties, but it also works very well in practice.
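For intuition about the shrinkage behavior referred to above, the small sketch below (illustrative only) compares the two rules: soft thresholding pulls every coefficient toward zero by $t$, so its output magnitudes never exceed the input magnitudes, which is what underlies the shrinkage condition when $t$ dominates the noise, while hard thresholding leaves surviving coefficients untouched.

```python
import numpy as np

def soft(y, t):
    """Soft threshold: shrink magnitudes toward zero by t."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def hard(y, t):
    """Hard threshold: keep coefficients with |y| >= t, zero the rest."""
    return np.where(np.abs(y) >= t, y, 0.0)

y = np.linspace(-3, 3, 13)
t = 1.0

# Soft thresholding always satisfies |T_s(y, t)| <= |y| (shrinkage),
# while hard thresholding either zeros a coefficient or passes it unchanged.
assert np.all(np.abs(soft(y, t)) <= np.abs(y))
assert np.all((hard(y, t) == 0) | (hard(y, t) == y))
print(np.column_stack([y, hard(y, t), soft(y, t)]))
```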