
In this section, we consider only real-valued wavelet functions that form an orthogonal basis, hence $\phi = \tilde\phi$ and $\psi = \tilde\psi$. We saw in Orthogonal Bases from Multiresolution analysis and wavelets how a given function belonging to $L^2(\mathbb{R})$ can be represented as a wavelet series. Here, we explain how to use a wavelet basis to construct a nonparametric estimator for the regression function $m$ in the model

$$Y_i = m(x_i) + \epsilon_i, \qquad i = 1, \ldots, n, \quad n = 2^J,\ J \in \mathbb{N},$$

where $x_i = i/n$ are equispaced design points and the errors are i.i.d. Gaussian, $\epsilon_i \sim N(0, \sigma_\epsilon^2)$.
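As a concrete illustration, the model above can be simulated directly; the test function $m$ and the noise level chosen below are hypothetical, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

J = 8
n = 2 ** J                    # sample size must be dyadic, n = 2^J
x = np.arange(1, n + 1) / n   # equispaced design points x_i = i/n

def m(t):
    # hypothetical smooth regression function (any test function would do)
    return np.sin(4 * np.pi * t)

sigma_eps = 0.3               # assumed noise standard deviation
Y = m(x) + rng.normal(0.0, sigma_eps, size=n)   # Y_i = m(x_i) + eps_i
```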

A wavelet estimator can be linear or nonlinear. The linear wavelet estimator proceeds by projecting the data onto a coarse-level space; this estimator is of kernel type, see "Linear smoothing with wavelets". Another possibility for estimating $m$ is to detect which detail coefficients convey the important information about the function $m$ and to set all the other coefficients equal to zero. This yields a nonlinear wavelet estimator, as described in "Nonlinear smoothing with wavelets".

Linear smoothing with wavelets

Suppose we are given data $(x_i, Y_i)_{i=1}^n$ coming from the model [link] and an orthogonal wavelet basis generated by $\{\phi, \psi\}$. The linear wavelet estimator proceeds by choosing a cutting level $j_1$ and represents an estimation of the projection of $m$ onto the space $V_{j_1}$:

$$\hat m(x) = \sum_{k=0}^{2^{j_0}-1} \hat s_{j_0,k}\, \phi_{j_0,k}(x) + \sum_{j=j_0}^{j_1-1} \sum_{k=0}^{2^j-1} \hat d_{j,k}\, \psi_{j,k}(x) = \sum_k \hat s_{j_1,k}\, \phi_{j_1,k}(x),$$

with $j_0$ the coarsest level in the decomposition, and where the so-called empirical coefficients are computed as

$$\hat s_{j,k} = \frac{1}{n} \sum_{i=1}^n Y_i\, \phi_{jk}(x_i) \quad \text{and} \quad \hat d_{j,k} = \frac{1}{n} \sum_{i=1}^n Y_i\, \psi_{jk}(x_i).$$
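For the Haar basis, where $\phi_{j,k}$ and $\psi_{j,k}$ have simple closed forms, the empirical coefficients can be computed exactly as written. This is a sketch assuming the Haar system on $[0,1)$; the function names are ours:

```python
import numpy as np

def phi_haar(j, k, x):
    # Haar scaling function: phi_{j,k}(x) = 2^{j/2} on [k/2^j, (k+1)/2^j), else 0
    return 2 ** (j / 2) * ((x >= k / 2 ** j) & (x < (k + 1) / 2 ** j))

def psi_haar(j, k, x):
    # Haar wavelet: +2^{j/2} on the first half of the support, -2^{j/2} on the second
    h = 1 / 2 ** (j + 1)
    left = (x >= k / 2 ** j) & (x < k / 2 ** j + h)
    right = (x >= k / 2 ** j + h) & (x < (k + 1) / 2 ** j)
    return 2 ** (j / 2) * (left.astype(float) - right.astype(float))

def empirical_coeffs(Y, x, j):
    # s_hat_{j,k} = (1/n) sum_i Y_i phi_{j,k}(x_i), and likewise for d_hat_{j,k}
    n = len(Y)
    s = np.array([(Y * phi_haar(j, k, x)).sum() / n for k in range(2 ** j)])
    d = np.array([(Y * psi_haar(j, k, x)).sum() / n for k in range(2 ** j)])
    return s, d
```

When $Y$ happens to equal $\phi_{j,k}$ sampled at the design points, the Riemann sum reproduces the orthonormality relations up to discretization.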

The cutting level $j_1$ plays the role of a smoothing parameter: a small value of $j_1$ means that many detail coefficients are left out, which may lead to oversmoothing. On the other hand, if $j_1$ is too large, too many coefficients are kept, and some artificial bumps will probably remain in the estimate of $m(x)$.

To see that the estimator [link] is of kernel type, consider first the projection of $m$ onto $V_{j_1}$:

$$P_{V_{j_1}} m(x) = \sum_k \left( \int m(y)\, \phi_{j_1,k}(y)\, dy \right) \phi_{j_1,k}(x) = \int K_{j_1}(x, y)\, m(y)\, dy,$$

where the (convolution) kernel $K_{j_1}(x, y)$ is given by

$$K_{j_1}(x, y) = \sum_k \phi_{j_1,k}(y)\, \phi_{j_1,k}(x).$$
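For the Haar scaling function this kernel has an explicit form: $K_{j_1}(x,y) = 2^{j_1}$ when $x$ and $y$ lie in the same dyadic interval of length $2^{-j_1}$, and $0$ otherwise. A quick numerical check (a sketch for the Haar case only) confirms that the kernel integrates to one in $y$ for fixed $x$:

```python
import numpy as np

def K_haar(j1, x, y):
    # Haar projection kernel: 2^{j1} when x and y fall in the same dyadic
    # bin [k/2^{j1}, (k+1)/2^{j1}), and 0 otherwise
    return np.where(np.floor(x * 2 ** j1) == np.floor(y * 2 ** j1),
                    2.0 ** j1, 0.0)

# Riemann-sum check that int K(x, y) dy = 1 for a fixed x
y = (np.arange(1024) + 0.5) / 1024      # midpoint grid on [0, 1]
integral = K_haar(3, 0.3, y).sum() / 1024   # ≈ 1.0
```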

Härdle et al. [link] studied the approximation properties of this projection operator. In order to estimate [link] , Antoniadis et al. [link] proposed to take:

$$\widehat{P_{V_{j_1}} m}(x) = \sum_{i=1}^n Y_i \int_{(i-1)/n}^{i/n} K_{j_1}(x, y)\, dy = \sum_k \left( \sum_{i=1}^n Y_i \int_{(i-1)/n}^{i/n} \phi_{j_1,k}(y)\, dy \right) \phi_{j_1,k}(x).$$

Approximating the last integral by $\frac{1}{n}\, \phi_{j_1,k}(x_i)$, we recover the estimator $\hat m(x)$ in [link].
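In the Haar case the resulting linear estimator has a transparent kernel interpretation: $\hat m$ is simply the average of the observations over each dyadic bin of length $2^{-j_1}$. A minimal sketch (grouping the $x_i = i/n$ into consecutive blocks, so boundary conventions are glossed over):

```python
import numpy as np

def haar_linear_smoother(Y, j1):
    # Linear Haar estimator: a piecewise-constant fit equal to the mean of
    # the Y_i over each of the 2^{j1} dyadic bins; requires len(Y) = 2^J
    # with j1 <= J.
    n = len(Y)
    width = n // 2 ** j1
    means = Y.reshape(2 ** j1, width).mean(axis=1)
    return np.repeat(means, width)   # estimate evaluated at the design points
```

Increasing $j_1$ shrinks the bins and lets more detail through, while a small $j_1$ averages aggressively, mirroring the over/undersmoothing trade-off described above.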

By orthogonality of the wavelet transform and Parseval's equality, the $L_2$-risk (or integrated mean square error, IMSE) of a linear wavelet estimator is equal to the $\ell_2$-risk of its wavelet coefficients:

$$\mathrm{IMSE} = E \| \hat m - m \|_{L_2}^2 = \sum_k E[\hat s_{j_0,k} - s_{j_0,k}]^2 + \sum_{j=j_0}^{j_1-1} \sum_k E[\hat d_{jk} - d_{jk}]^2 + \sum_{j=j_1}^{\infty} \sum_k d_{jk}^2 = S_1 + S_2 + S_3,$$

where

$$s_{jk} := \langle m, \phi_{jk} \rangle \quad \text{and} \quad d_{jk} := \langle m, \psi_{jk} \rangle$$

are called `theoretical' coefficients in the regression context. The term $S_1 + S_2$ in [link] constitutes the stochastic bias, whereas $S_3$ is the deterministic bias. The optimal cutting level is such that these two terms are of the same order. If $m$ is $\beta$-Hölder continuous, it is easy to see that the optimal cutting level satisfies $2^{j_1(n)} = O(n^{1/(1+2\beta)})$. The resulting optimal IMSE is of order $n^{-2\beta/(2\beta+1)}$. In practice, cross-validation methods are often used to determine the optimal level $j_1$ [link], [link].
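The rate calculation can be turned into a small helper: choosing $j_1$ so that $2^{j_1}$ is of order $n^{1/(2\beta+1)}$. This is a sketch of the rate only, with constants ignored, not a practical level selector like cross-validation:

```python
import math

def optimal_cut_level(n, beta):
    # j1 such that 2^{j1} ~ n^{1/(2*beta + 1)}, i.e.
    # j1 ~ log2(n) / (2*beta + 1); constants are ignored
    return max(0, round(math.log2(n) / (2 * beta + 1)))
```

For example, with $n = 2^{21}$ observations and $\beta = 1$ this gives $j_1 = 7$, so only $2^7 = 128$ scaling coefficients are kept out of $2^{21}$ data points.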

Source:  OpenStax, An introduction to wavelet analysis. OpenStax CNX. Sep 14, 2009 Download for free at http://cnx.org/content/col10566/1.3