
The Wiener filter, $W_{opt} = R^{-1} P$, is ideal for many applications, but several issues must be addressed before it can be used in practice.

In practice one usually does not know exactly the statistics of $x_k$ and $d_k$ (i.e., $R$ and $P$) needed to compute the Wiener filter.

How do we surmount this problem?

Estimate the statistics

$$r_{xx}(l) = \frac{1}{N} \sum_{k=0}^{N-1} x_k x_{k+l}, \qquad r_{xd}(l) = \frac{1}{N} \sum_{k=0}^{N-1} d_k x_{k-l}$$

then solve $W_{opt} = R^{-1} P$ using these estimates.
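As a concrete illustration, here is a minimal sketch in Python/NumPy of forming these correlation estimates and solving the resulting normal equations. The function name `wiener_from_data` and the toy system-identification example are illustrative assumptions, not part of the original module.

```python
import numpy as np

def wiener_from_data(x, d, M, N=None):
    """Estimate R and P from data records x, d and solve W_opt = R^{-1} P.

    x, d : 1-D NumPy arrays (observed input and desired signal)
    M    : filter length
    """
    if N is None:
        N = len(x)
    # r_xx(l) = (1/N) sum_k x[k] x[k+l]   (terms running off the record are dropped)
    r_xx = np.array([np.dot(x[:N - l], x[l:N]) / N for l in range(M)])
    # r_xd(l) = (1/N) sum_k d[k] x[k-l]
    r_xd = np.array([np.dot(d[l:N], x[:N - l]) / N for l in range(M)])
    # Toeplitz autocorrelation matrix R and cross-correlation vector P
    R = np.array([[r_xx[abs(i - j)] for j in range(M)] for i in range(M)])
    P = r_xd
    return np.linalg.solve(R, P)  # W_opt = R^{-1} P

# Illustrative use: identify a 3-tap system from its input/output data.
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)
h = np.array([0.5, -0.3, 0.1])
d = np.convolve(x, h)[:len(x)]
print(wiener_from_data(x, d, M=3))   # close to h
```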

In many applications, the statistics of $x_k$, $d_k$ vary slowly with time.

How does one develop an adaptive system which tracks these changes over time to keep the system near-optimal at all times?

Use short-time windowed estimates of the correlation functions.

$$r_{xx}^{k}(l) = \frac{1}{N} \sum_{m=0}^{N-1} x_{k-m}\, x_{k-m-l}$$
$$r_{dx}^{k}(l) = \frac{1}{N} \sum_{m=0}^{N-1} x_{k-m-l}\, d_{k-m}$$

and $W_{opt}^{k} = \left(R^{k}\right)^{-1} P^{k}$.

How can $r_{xx}^{k}(l)$ be computed efficiently?

Recursively!

$$r_{xx}^{k}(l) = r_{xx}^{k-1}(l) + x_k x_{k-l} - x_{k-N} x_{k-N-l}$$

This is critically stable, so people usually use the exponentially weighted estimate

$$r_{xx}^{k}(l) = (1-\alpha)\, r_{xx}^{k-1}(l) + \alpha\, x_k x_{k-l}$$

where $0 < \alpha \ll 1$.
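Below is a minimal sketch of both recursions in NumPy; the function names and the step-size value are illustrative assumptions. Note that the common $1/N$ scale factor can be dropped, since it cancels in $W_{opt}^{k} = (R^{k})^{-1} P^{k}$ as long as the cross-correlation estimate is scaled the same way.

```python
import numpy as np

def update_corr_sliding(r_xx, x, k, N):
    """Exact sliding-window recursion (requires k >= N + len(r_xx) - 1):
    r_xx^k(l) = r_xx^{k-1}(l) + x[k] x[k-l] - x[k-N] x[k-N-l].
    Critically stable: numerical errors are never forgotten."""
    for l in range(len(r_xx)):
        r_xx[l] += x[k] * x[k - l] - x[k - N] * x[k - N - l]
    return r_xx

def update_corr_exponential(r_xx, x, k, alpha=0.01):
    """Exponentially weighted recursion usually used instead:
    r_xx^k(l) = (1 - alpha) r_xx^{k-1}(l) + alpha x[k] x[k-l]."""
    for l in range(len(r_xx)):
        r_xx[l] = (1.0 - alpha) * r_xx[l] + alpha * x[k] * x[k - l]
    return r_xx
```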

How does one choose N?

Tradeoffs

Larger N gives more accurate estimates of the correlation values and hence a better $W_{opt}$. However, larger N leads to slower adaptation.

The success of adaptive systems depends on $x$ and $d$ being roughly stationary over at least $N$ samples, $N > M$. That is, all adaptive filtering algorithms require that the underlying system vary slowly with respect to the sampling rate and the filter length (although they can tolerate occasional step discontinuities in the underlying system).

Computational considerations

As presented here, an adaptive filter requires computing a matrix inverse at each sample. Actually, since the matrix $R$ is Toeplitz, the linear system of equations can be solved with $O(M^2)$ computations using Levinson's algorithm, where $M$ is the filter length (see the sketch after the list below). However, in many applications this may be too expensive, especially since computing the filter output itself requires only $O(M)$ computations. There are two main approaches to resolving the computation problem:

  • Take advantage of the fact that $R^{k+1}$ is only slightly changed from $R^{k}$ to reduce the computation to $O(M)$; these algorithms are called Fast Recursive Least Squares algorithms; all methods proposed so far have stability problems and are dangerous to use.
  • Find a different approach to solving the optimization problem that doesn't require explicit inversion of the correlation matrix.
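To illustrate the Toeplitz solve referred to above (not the fast RLS recursions themselves), the sketch below uses SciPy's `solve_toeplitz`, which is based on the Levinson recursion and runs in $O(M^2)$; the data, filter length, and toy system are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_toeplitz   # Levinson-recursion based, O(M^2)

rng = np.random.default_rng(0)
M = 8                                          # filter length (illustrative)
x = rng.standard_normal(5000)
d = np.convolve(x, [0.4, 0.2, -0.1])[:len(x)]  # toy desired signal

# Correlation estimates (first column of R, and P), as earlier in the module.
N = len(x)
r_xx = np.array([np.dot(x[:N - l], x[l:]) for l in range(M)]) / N
r_xd = np.array([np.dot(d[l:], x[:N - l]) for l in range(M)]) / N

# O(M^2) Toeplitz solve versus the generic O(M^3) dense solve.
w_fast = solve_toeplitz(r_xx, r_xd)
R = np.array([[r_xx[abs(i - j)] for j in range(M)] for i in range(M)])
w_ref = np.linalg.solve(R, r_xd)
print(np.allclose(w_fast, w_ref))              # agree up to numerical precision
```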

Adaptive algorithms involving the correlation matrix are called Recursive Least Squares (RLS) algorithms. Historically, they were developed after the LMS algorithm, which is the simplest and most widely used approach ($O(M)$ per sample). $O(M^2)$ RLS algorithms are used in applications requiring very fast adaptation.
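For contrast with RLS, here is a minimal sketch of the generic textbook LMS update (the step size `mu` and zero initialization are illustrative assumptions, not choices from the original module); each sample costs only $O(M)$ because it touches the $M$ filter taps once.

```python
import numpy as np

def lms(x, d, M, mu=0.01):
    """Generic LMS sketch: w <- w + mu * e_k * x_vec,
    where e_k = d_k - w^T x_vec is the a priori error.
    x, d : 1-D NumPy arrays; M : filter length; mu : step size."""
    w = np.zeros(M)
    y = np.zeros(len(x))
    e = np.zeros(len(x))
    for k in range(M, len(x)):
        x_vec = x[k - M + 1:k + 1][::-1]   # [x_k, x_{k-1}, ..., x_{k-M+1}]
        y[k] = w @ x_vec                   # filter output
        e[k] = d[k] - y[k]                 # error
        w = w + mu * e[k] * x_vec          # O(M) coefficient update
    return w, y, e
```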

Source:  OpenStax, Fundamentals of signal processing(thu). OpenStax CNX. Aug 07, 2007 Download for free at http://cnx.org/content/col10446/1.1
