where $\operatorname{tr}(\cdot)$ denotes the trace of a matrix. For the derivation we have used the fact that the noise has zero mean and consequently the transformed noise components have zero mean. Notice that for orthogonal $W$, Eq. [link] immediately specializes to Eq. [link]. Eq. [link] depends on the particular signal $x$, the transform $W$, and the noise level $\epsilon$.
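To make the role of the trace concrete, here is a short reconstruction, under the assumption of white input noise $n$ with level $\epsilon$ and transform matrix $W$ (consistent with the setup above): the transform-domain noise $Wn$ has covariance $\epsilon^2 W W^{T}$, so its expected energy is

$$ E\,\|W n\|_2^2 \;=\; \epsilon^2 \operatorname{tr}\!\big(W W^{T}\big), $$

which reduces to $N\epsilon^2$ when $W$ is an orthogonal $N \times N$ matrix, which is why the orthogonal case specializes as stated.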
It can be shown that when using the SIDWT introduced above together with the thresholding scheme proposed by Donoho (including his choice of the threshold), the same upper bound holds for the actual risk as in the case of the orthogonal DWT: the ideal risk times a logarithmic (in $N$) factor. We give only an outline of the proof. Johnstone and Silverman state [link] that for colored noise an oracle chooses a threshold proportional to $\sigma_i$, the standard deviation of the $i$th component. Since Donoho's method applies uniform thresholding to all components, one has to show that the diagonal elements of the transform-domain noise covariance matrix (the variances of the transformed noise components) are identical. This can be shown by considering the reconstruction scheme of the SIDWT. With these statements, the rest of the proof can be carried out in the same way as the one given by Donoho and Johnstone [link].
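The following sketch illustrates this scheme, assuming PyWavelets' stationary wavelet transform (pywt.swt) as a stand-in for the SIDWT, and Donoho's universal threshold $\epsilon\sqrt{2\log N}$ with the noise level estimated from the finest-scale detail coefficients. The function name and parameter defaults are illustrative, not from the original text.

import numpy as np
import pywt

def sidwt_denoise(y, wavelet="db4", level=4):
    """Soft-threshold the redundant (shift-invariant) DWT of y.

    len(y) must be divisible by 2**level for pywt.swt.
    """
    n = len(y)
    # List of (cA, cD) pairs, ordered coarsest to finest scale.
    coeffs = pywt.swt(y, wavelet, level=level)
    # Robust noise estimate from the finest-scale details (MAD / 0.6745).
    sigma = np.median(np.abs(coeffs[-1][1])) / 0.6745
    lam = sigma * np.sqrt(2.0 * np.log(n))  # universal threshold
    denoised = [(cA, pywt.threshold(cD, lam, mode="soft"))
                for cA, cD in coeffs]
    return pywt.iswt(denoised, wavelet)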
The two examples illustrated in [link] show how wavelet-based denoising works. The first shows a chirp or Doppler signal which has a changing frequency and amplitude. Noise is added to this chirp in (b), and the result of basic Donoho denoising is shown in (c) and of redundant DWT denoising in (d). First, notice how well the noise is removed, at almost no sacrifice of the signal. This would be impossible with traditional linear filters.
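As a hedged, self-contained stand-in for this experiment (the signal parameters and noise level below are assumed, not the book's exact data), one can compare basic orthogonal-DWT thresholding with the redundant version, reusing sidwt_denoise() from the sketch above:

import numpy as np
import pywt

n = 1024
t = np.linspace(0, 1, n, endpoint=False)
# Doppler-style test signal: changing frequency and amplitude.
doppler = np.sqrt(t * (1 - t)) * np.sin(2.1 * np.pi / (t + 0.05))
rng = np.random.default_rng(0)
noisy = doppler + 0.1 * rng.standard_normal(n)

# Basic Donoho denoising with the orthogonal DWT.
coeffs = pywt.wavedec(noisy, "db4", level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745
lam = sigma * np.sqrt(2.0 * np.log(n))
coeffs[1:] = [pywt.threshold(c, lam, mode="soft") for c in coeffs[1:]]
dwt_est = pywt.waverec(coeffs, "db4")

print("noisy MSE:", np.mean((noisy - doppler) ** 2))
print("DWT   MSE:", np.mean((dwt_est - doppler) ** 2))
print("SIDWT MSE:", np.mean((sidwt_denoise(noisy) - doppler) ** 2))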
The second example is the Houston skyline, where the improvement of the redundant DWT is more obvious.
This problem is very similar to the signal recovery problem: a signal has to be estimated from additive white Gaussian noise. By linearity, additive noise is additive in the transform domain, where the problem becomes: estimate $\theta$ from $y = \theta + \epsilon z$, where $z$ is a noise vector (with each component being a zero-mean, unit-variance Gaussian random variable) and $\epsilon$ is a scalar noise level. The performance measured by the mean squared error (the same in the signal and transform domains, by Parseval) is given by

$$ R(\hat{\theta}, \theta) = E\,\big\|\hat{\theta}(y) - \theta\big\|_2^2 . $$
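A minimal numerical check of the Parseval remark (the random orthogonal $W$ below is an illustrative choice, not from the text): the error energy is identical in the signal and transform domains.

import numpy as np

rng = np.random.default_rng(1)
n = 256
# Random orthogonal matrix via QR of a Gaussian matrix.
W, _ = np.linalg.qr(rng.standard_normal((n, n)))

theta = rng.standard_normal(n)             # true coefficients
eps = 0.5                                  # scalar noise level
y = theta + eps * rng.standard_normal(n)   # y = theta + eps * z

err = y - theta
# Same squared error with or without the orthogonal transform.
print(np.allclose(np.sum(err**2), np.sum((W @ err) ** 2)))  # True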
It depends on the signal ($\theta$), the estimator $\hat{\theta}$, the noise level $\epsilon$, and the basis.
For a fixed $\epsilon$, the optimal minimax procedure is the one that minimizes the error for the worst possible signal from the coefficient body $\Theta$.
Consider the particular nonlinear procedure that corresponds to soft-thresholding of every noisy coefficient $y_i$:

$$ \hat{\theta}_i = \operatorname{sgn}(y_i)\,\big(|y_i| - \lambda\big)_+ . $$
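Componentwise, the rule is simple to implement; a minimal version follows (pywt.threshold with mode="soft" computes the same thing):

import numpy as np

def soft_threshold(y, lam):
    # sgn(y_i) * max(|y_i| - lam, 0) for every component.
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)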
Let $r(\hat{\theta}, \theta)$ be the corresponding error for signal $\theta$ and let $r(\hat{\theta}, \Theta) = \sup_{\theta \in \Theta} r(\hat{\theta}, \theta)$ be the worst-case error for the coefficient body $\Theta$.
If the coefficient body $\Theta$ is solid and orthosymmetric in a particular basis, then asymptotically ($\epsilon \to 0$) the error decays at least as fast in this basis as in any other basis. That is, $r(\hat{\theta}, \Theta)$ approaches zero at least as fast as $r(\hat{\theta}, U\Theta)$ for any orthogonal matrix $U$. Therefore, unconditional bases are nearly optimal asymptotically. Moreover, for small $\epsilon$ we can relate the error of this procedure to that of any other procedure as follows [link].
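One closely related bound, stated here as background (it captures the logarithmic-factor behavior referred to above and in the SIDWT discussion, though it is not necessarily the exact relation the text had in mind), is Donoho and Johnstone's oracle inequality for soft thresholding with $\lambda = \epsilon\sqrt{2\log n}$:

$$ E\,\big\|\hat{\theta} - \theta\big\|_2^2 \;\le\; (2\log n + 1)\Big(\epsilon^2 + \sum_{i=1}^{n}\min(\theta_i^2, \epsilon^2)\Big), $$

where $\sum_i \min(\theta_i^2, \epsilon^2)$ is the ideal (oracle) risk.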