The Kalman filter is just one of many
adaptive filtering (or estimation)
algorithms. Despite its elegant derivation and often excellent performance, the Kalman filter has two drawbacks:
The derivation, and hence the performance, of the Kalman filter depends on the accuracy of the a priori assumptions. The performance can be less than impressive if the assumptions are erroneous.
The Kalman filter is fairly computationally demanding, requiring $O(p^2)$ operations per sample, where $p$ is the filter order. This can limit the utility of Kalman filters in high-rate, real-time applications.
As a popular alternative to the Kalman filter, we will investigate the so-called least-mean-square (LMS) adaptive filtering algorithm.
The principal advantages of LMS are:
No prior assumptions are made regarding the signal to be estimated.
Computationally, LMS is very efficient, requiring only $O(p)$ operations per sample.
The price we pay with LMS instead of a Kalman filter is that the rate of convergence and adaptation to sudden changes is slower for LMS than for the Kalman filter (with correct prior assumptions).
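Although the LMS update is not derived until later in these notes, a minimal sketch makes the complexity claim concrete. The function name lms_step and the step-size symbol mu below are illustrative labels, not notation from the notes:

```python
import numpy as np

def lms_step(w, x_vec, y_k, mu):
    """One LMS weight update; costs O(p) operations for a length-p filter."""
    y_hat = np.dot(x_vec, w)   # filter output (p multiplies)
    e = y_k - y_hat            # scalar error sample
    return w + mu * e * x_vec  # gradient-style correction (p multiplies)
```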
Adaptive filtering applications
Channel/system identification
Noise cancellation
Suppression of the maternal ECG component in fetal ECG (figure). The filter output $\hat{y}_k$ is an estimate of the maternal ECG signal present in the abdominal signal $y_k$, so subtracting it leaves the fetal ECG.
Channel equalization
Adaptive controller
Iterative minimization
Most adaptive filtering algorithms (LMS included) are modifications of standard iterative procedures for solving minimization problems in a real-time or on-line fashion. Therefore, before deriving the LMS algorithm we will look at iterative methods of minimizing error criteria such as the MSE.
Consider the following set-up: we observe a signal $x_k$ and wish to estimate a related signal $y_k$ from it.
Linear estimator
$$\hat{y}_k = w_1 x_k + w_2 x_{k-1} + \dots + w_p x_{k-p+1}$$
Impulse response of the filter:
$$\dots, 0, 0, w_1, w_2, \dots, w_p, 0, 0, \dots$$
Vector notation
$$\hat{y}_k = X_k^T W$$
where
$$X_k = (x_k, x_{k-1}, \dots, x_{k-p+1})^T$$
and
$$W = (w_1, w_2, \dots, w_p)^T$$
Error signal
$$e_k = y_k - \hat{y}_k = y_k - X_k^T W$$
Assumptions
$x_k$ and $y_k$ are jointly stationary with zero-mean.
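As a concrete illustration of this notation, here is a small numerical sketch (with placeholder white-noise signals and an assumed filter order p = 4) that forms $X_k$, the estimate $\hat{y}_k$, and the error $e_k$ at one time index:

```python
import numpy as np

rng = np.random.default_rng(0)

p = 4                              # filter order (illustrative choice)
w = rng.standard_normal(p)         # a candidate weight vector W
x = rng.standard_normal(1000)      # observed signal x_k
y = rng.standard_normal(1000)      # signal to be estimated y_k

k = 100                                 # some time index
X_k = x[k - p + 1 : k + 1][::-1]        # X_k = (x_k, x_{k-1}, ..., x_{k-p+1})^T
y_hat_k = X_k @ w                       # estimate  y_hat_k = X_k^T W
e_k = y[k] - y_hat_k                    # error     e_k = y_k - X_k^T W
```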
MSE
$$E[e_k^2] = E[(y_k - X_k^T W)^2] = \sigma_y^2 - 2 W^T R_{xy} + W^T R_{xx} W$$
where $\sigma_y^2$ is the variance of $y_k$, $R_{xx} = E[X_k X_k^T]$ is the covariance matrix of $X_k$, and $R_{xy} = E[X_k y_k]$ is the cross-covariance between $X_k$ and $y_k$.
The MSE is quadratic in $W$, which implies the MSE surface is "bowl" shaped with a unique minimum point (figure).
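To see the quadratic form numerically, one can replace the moments with their sample estimates, continuing the toy sketch above; by algebra, the quadratic expression then matches the directly averaged squared error exactly:

```python
# Stack the input vectors X_k into a data matrix (continuing the sketch above).
X = np.array([x[k - p + 1 : k + 1][::-1] for k in range(p - 1, len(x))])
Y = y[p - 1:]

R_xx = X.T @ X / len(Y)       # sample covariance matrix of X_k (zero-mean)
R_xy = X.T @ Y / len(Y)       # sample cross-covariance E[X_k y_k]
var_y = np.mean(Y ** 2)       # sample variance of y_k (zero-mean)

# Quadratic form vs. directly averaged squared error: identical up to
# floating-point error, since the sample moments were built from the same data.
mse_quad = var_y - 2 * w @ R_xy + w @ R_xx @ w
mse_direct = np.mean((Y - X @ w) ** 2)
```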
Optimum filter
Minimize the MSE:
$$\frac{\partial}{\partial W} E[e_k^2] = -2 R_{xy} + 2 R_{xx} W = 0 \;\Rightarrow\; W_{opt} = R_{xx}^{-1} R_{xy}$$
Notice that we can re-write this as
$$E[X_k X_k^T] W = E[X_k y_k]$$
or
$$E[X_k (y_k - X_k^T W)] = E[X_k e_k] = 0$$
which shows that the error signal is orthogonal to the input $X_k$ (by the orthogonality principle of minimum MSE estimators).
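Continuing the same sketch, $W_{opt}$ follows from solving the normal equations, and the orthogonality property can be checked on the sample moments:

```python
# Normal equations: solve R_xx W_opt = R_xy (no explicit matrix inverse needed).
w_opt = np.linalg.solve(R_xx, R_xy)

# Orthogonality check: the error is uncorrelated with the input vector,
# i.e. the sample version of E[X_k e_k] = 0.
e = Y - X @ w_opt
print(X.T @ e / len(Y))   # essentially the zero vector (floating-point level)
```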
Steepest descent
Although we can easily determine $W_{opt}$ by solving the system of equations $R_{xx} W = R_{xy}$, let's look at an iterative procedure for solving this problem. This will set the stage for our adaptive filtering algorithm.
We want to minimize the MSE. The idea is simple. Starting at some initial weight vector $W_0$, iteratively adjust the values to decrease the MSE (figure).
We want to move $W_0$ towards the optimal vector $W_{opt}$. In order to move in the correct direction, we must move downhill, or in the direction opposite to the gradient of the MSE surface at the point $W_0$. Thus, a natural and simple adjustment takes the form
$$W_1 = W_0 - \frac{1}{2}\mu \left.\frac{\partial}{\partial W} E[e_k^2]\right|_{W = W_0}$$
where $\mu$ is the step size and tells us how far to move in the negative gradient direction (figure).
Generalizing this idea to an iterative strategy, we get
$$W_k = W_{k-1} - \frac{1}{2}\mu \left.\frac{\partial}{\partial W} E[e_k^2]\right|_{W = W_{k-1}}$$
and we can repeatedly update $W$: $W_0, W_1, W_2, \dots, W_k$. Hopefully each subsequent $W_k$ is closer to $W_{opt}$. Does the procedure converge? Can we adapt it to an on-line, real-time, dynamic situation in which the signals may not be stationary?
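On the first question, a quick numerical sketch (continuing the example above) suggests what to expect: since the MSE is quadratic, the gradient at $W_{k-1}$ is $-2R_{xy} + 2R_{xx}W_{k-1}$, and the iteration approaches $W_{opt}$ for a small enough step size. The value of $\mu$ and the iteration count below are arbitrary illustrative choices:

```python
mu = 0.05             # step size; needs mu < 2 / lambda_max(R_xx) to converge
w_sd = np.zeros(p)    # initial weight vector W_0

for _ in range(200):
    grad = -2 * R_xy + 2 * R_xx @ w_sd    # gradient of the MSE at W_{k-1}
    w_sd = w_sd - 0.5 * mu * grad         # W_k = W_{k-1} - (1/2) mu * gradient

print(np.linalg.norm(w_sd - w_opt))       # shrinks toward 0 as iterations grow
```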