It is important to analyze the LMS algorithm to determine under what conditions it is stable, whether or not it converges to the Wiener solution, how quickly it converges, and how much degradation is suffered due to the noisy gradient. In particular, we need to know how to choose the parameter $\mu$.
Does $E[W_k]$ approach the Wiener solution as $k \to \infty$? (Since $W_k$ is always somewhat random in the approximate gradient-based LMS algorithm, we ask whether the expected value of the filter coefficients converges to the Wiener solution.) Taking the expectation of the LMS update $W_{k+1} = W_k + 2\mu\epsilon_k X_k$, with $\epsilon_k = d_k - W_k^T X_k$, gives
$$E[W_{k+1}] = E[W_k] + 2\mu E[d_k X_k] - 2\mu E[X_k X_k^T W_k]$$
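For concreteness, here is a minimal NumPy sketch of the LMS recursion being analyzed; the signal arrays x and d, the tap count N, and the step size mu are placeholders for illustration, not part of the original text:

```python
import numpy as np

def lms(x, d, N, mu):
    """LMS recursion W_{k+1} = W_k + 2*mu*eps_k*X_k.

    x  : input signal (1-D array)
    d  : desired signal (same length as x)
    N  : number of filter taps
    mu : step size
    Returns the trajectory of coefficient vectors W_k.
    """
    W = np.zeros(N)
    history = []
    for k in range(N - 1, len(x)):
        X = x[k - N + 1:k + 1][::-1]   # regressor X_k = [x_k, ..., x_{k-N+1}]
        eps = d[k] - W @ X             # a priori error eps_k = d_k - W_k^T X_k
        W = W + 2 * mu * eps * X       # coefficient update
        history.append(W.copy())
    return np.array(history)
```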
We assume that $X_k$ and $X_{k-i}$, $d_k$ and $d_{k-i}$, and $X_k$ and $d_{k-i}$ are statistically independent for $i \neq 0$. This assumption is obviously false, since $X_{k+1}$ is the same as $X_k$ except for shifting down the vector elements one place and adding one new sample. We make this assumption because otherwise it becomes extremely difficult to analyze the LMS algorithm. (The first good analysis not making this assumption is due to Macchi and Eweda.) Many simulations and much practical experience have shown that the results one obtains with analyses based on the patently false assumption above are quite accurate in most situations.
With the independence assumption, $W_k$ (which depends only on previous $X_j$, $d_j$, $j < k$) is statistically independent of $X_k$, and we can simplify the troublesome term $E[X_k X_k^T W_k]$.
Now $X_k X_k^T W_k$ is a vector, and taking its expectation element by element under the independence assumption,
$$E[X_k X_k^T W_k] = E[X_k X_k^T]\, E[W_k] = R\, E[W_k]$$
where $R = E[X_k X_k^T]$ is the input autocorrelation matrix; similarly, $E[d_k X_k] = P$, the cross-correlation vector.
Putting this back into our equation,
$$E[W_{k+1}] = E[W_k] + 2\mu P - 2\mu R\, E[W_k] = (I - 2\mu R)\, E[W_k] + 2\mu P$$
If $E[W_k]$ converges, then as $k \to \infty$, $E[W_{k+1}] = E[W_k] = W_\infty$, and
$$W_\infty = (I - 2\mu R) W_\infty + 2\mu P \;\Rightarrow\; 2\mu R\, W_\infty = 2\mu P \;\Rightarrow\; W_\infty = R^{-1} P,$$
or the Wiener solution!
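As a quick empirical sanity check (a sketch under made-up conditions, not part of the derivation), one can average LMS coefficient estimates over many independent runs of a system-identification problem with white input, for which the Wiener solution is the true system response:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_samples, n_runs, mu = 4, 2000, 100, 0.01
h_true = np.array([1.0, -0.5, 0.25, 0.1])   # unknown system to identify

W_avg = np.zeros(N)
for _ in range(n_runs):
    x = rng.standard_normal(n_samples)
    d = np.convolve(x, h_true)[:n_samples] + 0.1 * rng.standard_normal(n_samples)
    W = np.zeros(N)
    for k in range(N - 1, n_samples):
        X = x[k - N + 1:k + 1][::-1]
        W += 2 * mu * (d[k] - W @ X) * X
    W_avg += W / n_runs

# For white input, R = sigma_x^2 I and P = sigma_x^2 h_true,
# so the Wiener solution W_opt = R^{-1} P is h_true itself.
print(W_avg)   # close to [1.0, -0.5, 0.25, 0.1]
```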
So the LMS algorithm, if it converges, gives filter coefficients which on average are the Wiener coefficients! This is, of course, a desirable result.
But does $E[W_k]$ converge, and if so, under what conditions?
Let's rewrite the analysis in terms of $V_k$, the "mean coefficient error vector" $V_k = E[W_k] - W_{opt}$, where $W_{opt} = R^{-1}P$ is the Wiener filter. Substituting $E[W_k] = V_k + W_{opt}$ into the recursion above and using $P = R\, W_{opt}$, the $W_{opt}$ terms cancel and
$$V_{k+1} = (I - 2\mu R)\, V_k$$
We wish to know under what conditions $V_k \to 0$ as $k \to \infty$.
Since $R$ is positive definite, real, and symmetric, all of its eigenvalues are real and positive. Also, we can write $R$ as $Q^{-1} \Lambda Q$, where $\Lambda$ is a diagonal matrix with diagonal entries $\lambda_i$ equal to the eigenvalues of $R$, and $Q$ is a unitary matrix with rows equal to the eigenvectors corresponding to the eigenvalues of $R$.
Using this fact,
$$V_{k+1} = (I - 2\mu Q^{-1} \Lambda Q)\, V_k$$
and multiplying both sides through on the left by $Q$ we get
$$Q V_{k+1} = (I - 2\mu\Lambda)\, Q V_k$$
Let $V'_k = Q V_k$:
$$V'_{k+1} = (I - 2\mu\Lambda)\, V'_k$$
Note that $V'_k$ is simply $V_k$ in a rotated coordinate set in $\mathbb{R}^N$, so convergence of $V'_k$ implies convergence of $V_k$.
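The coordinate rotation can be checked numerically; the sketch below uses an arbitrary example correlation matrix, and note that numpy.linalg.eigh returns eigenvectors as columns, so Q.T here plays the role of the $Q$ in the text:

```python
import numpy as np

# Arbitrary example correlation matrix (Toeplitz, from an AR(1)-like process)
R = np.array([[1.00, 0.80, 0.64],
              [0.80, 1.00, 0.80],
              [0.64, 0.80, 1.00]])

lam, Q = np.linalg.eigh(R)       # columns of Q are eigenvectors: R = Q diag(lam) Q^T
mu = 0.1
V = np.array([1.0, -1.0, 0.5])   # arbitrary initial mean coefficient error

Vp = Q.T @ V                     # rotated coordinates V'_0
for k in range(5):
    V = (np.eye(3) - 2 * mu * R) @ V    # coupled recursion V_{k+1} = (I - 2 mu R) V_k
    Vp = (1 - 2 * mu * lam) * Vp        # N decoupled scalar recursions
    assert np.allclose(Q.T @ V, Vp)     # identical trajectories in the rotated frame
```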
Since $I - 2\mu\Lambda$ is diagonal, all elements of $V'_k$ evolve independently of each other. Convergence (stability) boils down to whether all $N$ of the scalar, first-order difference equations
$$V'_{k+1,i} = (1 - 2\mu\lambda_i)\, V'_{k,i}$$
are stable, and thus $V'_{k,i} \to 0$. These equations converge to zero if $|1 - 2\mu\lambda_i| < 1$, or $-1 < 1 - 2\mu\lambda_i < 1$. Since $\mu$ and $\lambda_i$ are positive, this requires $\mu < \frac{1}{\lambda_i}$ for every mode, so for convergence in the mean of the LMS adaptive filter, we require
$$\mu < \frac{1}{\lambda_{\max}}$$
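The bound can be illustrated by iterating the mean-error recursion $V_{k+1} = (I - 2\mu R)V_k$ with step sizes just below and just above $\frac{1}{\lambda_{\max}}$ (the matrix and constants are arbitrary choices for the sketch):

```python
import numpy as np

R = np.array([[1.0, 0.8],
              [0.8, 1.0]])
lam_max = np.linalg.eigvalsh(R).max()       # lambda_max = 1.8 here

for mu in (0.9 / lam_max, 1.1 / lam_max):   # just below / just above the bound
    V = np.ones(2)
    for _ in range(200):
        V = (np.eye(2) - 2 * mu * R) @ V
    print(f"mu = {mu:.3f}: ||V_200|| = {np.linalg.norm(V):.3e}")
# mu < 1/lambda_max: the norm decays toward zero; mu > 1/lambda_max: it blows up
```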
For a correlation matrix, $\sum_{i=1}^{N} \lambda_i = \operatorname{tr}(R) = N r(0) = N E[x_k^2]$. So $\lambda_{\max} \leq N E[x_k^2]$. We can easily estimate $E[x_k^2]$ with $O(1)$ computations/sample, so in practice we might require
$$\mu < \frac{1}{N\, \widehat{E[x_k^2]}}$$
as a conservative bound, and perhaps adapt $\mu$ accordingly with time.
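A sketch of this rule of thumb, assuming a simple leaky (first-order recursive) power estimator; the smoothing constant beta and safety factor are made-up values for illustration:

```python
import numpy as np

def conservative_mu(x, N, beta=0.99, safety=0.1):
    """Per-sample conservative step size mu_k = safety / (N * power estimate).

    Since sum(lambda_i) = N*r(0) = N*E[x^2] >= lambda_max,
    mu < 1/(N*E[x^2]) guarantees mu < 1/lambda_max.
    """
    power = 1e-8                  # avoid division by zero at startup
    mus = np.empty(len(x))
    for k, xk in enumerate(x):
        power = beta * power + (1 - beta) * xk**2   # leaky estimate of E[x^2]
        mus[k] = safety / (N * power)
    return mus
```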
Each of the modes decays as $(1 - 2\mu\lambda_i)^k$, so the overall convergence time is governed by the slowest mode, the one associated with $\lambda_{\min}$.
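To put numbers on the decay rates, a short sketch converting each mode's geometric factor into a $1/e$ time constant $\tau_i = -1/\ln|1 - 2\mu\lambda_i|$ (eigenvalues reused from the example above; for small $\mu$, $\tau_i \approx \frac{1}{2\mu\lambda_i}$):

```python
import numpy as np

lam = np.array([1.8, 0.2])           # example eigenvalues of R
mu = 0.05
rates = 1 - 2 * mu * lam             # per-iteration decay factor of each mode
tau = -1 / np.log(np.abs(rates))     # iterations for each mode to shrink by 1/e
print(dict(zip(lam, tau)))           # {1.8: ~5.0, 0.2: ~49.5}: lambda_min is slowest
```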