What does this have to do with linear-phase filter design? It's the same problem! To show that, consider an odd-length, symmetric linear-phase filter of length $M$:
$$A(\omega) = \sum_{n=0}^{L} a(n)\cos(\omega n), \qquad L = \frac{M-1}{2}$$
where $a(0) = h\!\left(\frac{M-1}{2}\right)$ and $a(n) = 2h\!\left(\frac{M-1}{2} - n\right)$ for $n \ge 1$. Using trigonometric identities (such as $\cos(\omega n) = 2\cos(\omega)\cos(\omega(n-1)) - \cos(\omega(n-2))$), we can rewrite $A(\omega)$ as
$$A(\omega) = \sum_{k=0}^{L} \alpha_k (\cos\omega)^k$$
where the $\alpha_k$ are related to the $a(n)$ by a linear transformation. Now, let $x = \cos\omega$. This is a one-to-one mapping from $\omega \in [0, \pi]$ onto $x \in [-1, 1]$. Thus $A(\omega)$ is an $L$th-order polynomial in $x = \cos\omega$!
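This change of variable is exactly the Chebyshev relation $\cos(n\omega) = T_n(\cos\omega)$, so the linear transformation from the $a(n)$ to the $\alpha_k$ is a Chebyshev-to-power-basis conversion. A minimal numerical sketch (the coefficient values are arbitrary illustrations):

```python
import numpy as np
from numpy.polynomial import chebyshev, polynomial

# arbitrary cosine-series coefficients a(n) for an odd-length symmetric filter
a = np.array([0.5, 0.3, 0.2, -0.1])            # L = 3
w = np.linspace(0.0, np.pi, 9)

# A(w) as a cosine series: sum_n a(n) cos(w n)
A_cos = sum(an * np.cos(w * n) for n, an in enumerate(a))

# cos(n w) = T_n(cos w), so the alpha_k come from a Chebyshev->power conversion
alpha = chebyshev.cheb2poly(a)
A_poly = polynomial.polyval(np.cos(w), alpha)  # same A, as a polynomial in cos w
```

The two evaluations agree to machine precision, confirming that $A(\omega)$ really is an $L$th-order polynomial in $\cos\omega$.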
The alternation theorem holds for the $L_\infty$ filter design problem, too! Therefore, to determine whether or not a length-$M$, odd-length, symmetric linear-phase filter is optimal in an $L_\infty$ sense, simply count the alternations of $E(\omega) = W(\omega)\left(A_d(\omega) - A(\omega)\right)$ in the pass and stop bands. If there are $L + 2 = \frac{M+3}{2}$ or more alternations, $h(n)$, $0 \le n \le M-1$, is the optimal filter!
Optimality conditions for even-length symmetric linear-phase filters

For $M$ even,
$$A(\omega) = \sum_{n=0}^{L} b(n)\cos\!\left(\omega\left(n + \frac{1}{2}\right)\right), \qquad L = \frac{M}{2} - 1$$
where $b(n) = 2h\!\left(\frac{M}{2} - 1 - n\right)$. Using the trigonometric identity $\cos(\alpha + \beta) = \cos\alpha\cos\beta - \sin\alpha\sin\beta$ to pull out the $\cos\!\left(\frac{\omega}{2}\right)$ term and then using the other trig identities, it can be shown that $A(\omega)$ can be written as
$$A(\omega) = \cos\!\left(\frac{\omega}{2}\right)\sum_{k=0}^{L} \tilde{b}_k (\cos\omega)^k = \cos\!\left(\frac{\omega}{2}\right) P(\omega)$$
Again, this is a polynomial in $x = \cos\omega$, except for a weighting function out in front:
$$E(\omega) = W(\omega)\left(A_d(\omega) - \cos\!\left(\frac{\omega}{2}\right)P(\omega)\right)$$
which implies
$$E(\omega) = W'(\omega)\left(A_d'(\omega) - P(\omega)\right)$$
where $W'(\omega) = W(\omega)\cos\!\left(\frac{\omega}{2}\right)$ and $A_d'(\omega) = \frac{A_d(\omega)}{\cos\left(\frac{\omega}{2}\right)}$.

Again, this is a polynomial approximation problem, so the alternation theorem holds. If $E(\omega)$ has at least $L + 2 = \frac{M}{2} + 1$ alternations, the even-length symmetric filter is optimal in an $L_\infty$ sense.
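The factorization of the even-length amplitude response can be checked numerically: divide out $\cos(\omega/2)$ and verify that the quotient is exactly an $L$th-order polynomial in $\cos\omega$ (the $b(n)$ values below are arbitrary). A sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 4
b = rng.standard_normal(L + 1)                # arbitrary b(n), n = 0..L
w = np.linspace(0.05, np.pi - 0.05, 200)      # avoid cos(w/2) = 0 at w = pi

# A(w) = sum_n b(n) cos(w (n + 1/2))   (even-length symmetric form)
A = sum(bn * np.cos(w * (n + 0.5)) for n, bn in enumerate(b))

# divide out the cos(w/2) weighting; P should be a degree-L poly in x = cos w
P = A / np.cos(w / 2)
coef = np.polynomial.polynomial.polyfit(np.cos(w), P, L)
P_fit = np.polynomial.polynomial.polyval(np.cos(w), coef)
residual = np.max(np.abs(P - P_fit))
```

A degree-$L$ fit through 200 samples reproduces $P$ to machine precision, so no information beyond the $L+1$ polynomial coefficients is present.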
The prototypical filter design problem: the ideal lowpass filter, with passband $[0, \omega_p]$, stopband $[\omega_s, \pi]$, and a transition band in between. See the figure.
L-∞ optimal lowpass filter design lemma

1. The maximum possible number of alternations for a lowpass filter is $L + 3$. The proof is that the extrema of a polynomial $P(x)$ occur only where the derivative is zero: $\frac{dP(x)}{dx} = 0$. Since $\frac{dP(x)}{dx}$ is an $(L-1)$th-order polynomial, it can have at most $L - 1$ zeros. However, the mapping $x = \cos\omega$ implies that $\frac{dA(\omega)}{d\omega} = 0$ at $\omega = 0$ and $\omega = \pi$, for two more possible alternation points. Finally, the band edges $\omega_p$ and $\omega_s$ can also be alternations, for a total of $(L - 1) + 2 + 2 = L + 3$ possible alternations.
2. There must be an alternation at either $\omega = 0$ or $\omega = \pi$.
3. Alternations must occur at $\omega_p$ and $\omega_s$. See the figure.
4. The filter must be equiripple except at possibly $\omega = 0$ or $\omega = \pi$. Again see the figure.
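The endpoint claim in the counting argument, that $\frac{dA}{d\omega} = \frac{dP}{dx}\cdot(-\sin\omega)$ vanishes at $\omega = 0$ and $\omega = \pi$ regardless of $P$, can be verified with an arbitrary polynomial (the coefficients below are illustrative):

```python
import numpy as np
from numpy.polynomial import polynomial as poly

alpha = np.array([0.1, 0.4, -0.3, 0.2])        # arbitrary P(x), L = 3

def dA_dw(w):
    # chain rule: d/dw P(cos w) = P'(cos w) * (-sin w)
    return -np.sin(w) * poly.polyval(np.cos(w), poly.polyder(alpha))

endpoint_slopes = np.abs([dA_dw(0.0), dA_dw(np.pi)])   # forced to zero by sin w
interior_slope = abs(dA_dw(np.pi / 3))                 # generically nonzero
```

The slope vanishes at the endpoints purely because of the $\sin\omega$ factor, which is why $\omega = 0$ and $\omega = \pi$ contribute the two extra candidate alternation points.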
The alternation theorem doesn't directly suggest a method for computing the optimal filter. It simply tells us how to recognize that a filter is optimal, or isn't optimal. What we need is an intelligent way of guessing the optimal filter coefficients.
Suppose we guess a set of $L + 2$ extremal frequencies $\omega_0 < \omega_1 < \cdots < \omega_{L+1}$ and impose the alternation conditions
$$A(\omega_k) + \frac{(-1)^k \delta}{W(\omega_k)} = A_d(\omega_k), \qquad k = 0, 1, \dots, L+1$$
In matrix form, these $L + 2$ simultaneous equations become
$$\begin{bmatrix} 1 & \cos\omega_0 & \cdots & \cos(L\omega_0) & \frac{1}{W(\omega_0)} \\ 1 & \cos\omega_1 & \cdots & \cos(L\omega_1) & \frac{-1}{W(\omega_1)} \\ \vdots & \vdots & & \vdots & \vdots \\ 1 & \cos\omega_{L+1} & \cdots & \cos(L\omega_{L+1}) & \frac{(-1)^{L+1}}{W(\omega_{L+1})} \end{bmatrix} \begin{bmatrix} a(0) \\ a(1) \\ \vdots \\ a(L) \\ \delta \end{bmatrix} = \begin{bmatrix} A_d(\omega_0) \\ A_d(\omega_1) \\ \vdots \\ A_d(\omega_{L+1}) \end{bmatrix}$$
or, more compactly, $F\bar{a} = \bar{A}_d$. So, for the given set of $L + 2$ extremal frequencies, we can solve for the $a(n)$ and $\delta$ via $\bar{a} = F^{-1}\bar{A}_d$. Using the FFT, we can compute the amplitude response $A(\omega)$ of $h(n)$ on a dense set of frequencies. If the old $\omega_k$ are, in fact, the extremal locations of $E(\omega)$, then the alternation theorem is satisfied and $h(n)$ is optimal. If not, repeat the process with the new extremal locations.
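A minimal sketch of one such solve step, under illustrative assumptions ($W(\omega) = 1$, hypothetical band edges, and a naive initial guess of extremal frequencies):

```python
import numpy as np

L = 7                                          # polynomial order; M = 2L + 1
wp, ws = 0.4 * np.pi, 0.6 * np.pi              # hypothetical band edges

# naive initial guess: L+2 extremal frequencies spread over the two bands
wk = np.concatenate([np.linspace(0.0, wp, 5), np.linspace(ws, np.pi, 4)])
Ad = (wk <= wp).astype(float)                  # desired amplitude at the wk

# rows: [1, cos(wk), ..., cos(L wk), (-1)^k / W(wk)]  (here W = 1)
k = np.arange(L + 2)
F = np.hstack([np.cos(np.outer(wk, np.arange(L + 1))),
               ((-1.0) ** k)[:, None]])
sol = np.linalg.solve(F, Ad)                   # unknowns: a(0..L) and delta
a, delta = sol[:L + 1], sol[L + 1]

# dense-grid amplitude response from the cosine coefficients a(n)
wd = np.linspace(0.0, np.pi, 2048)
Aw = np.cos(np.outer(wd, np.arange(L + 1))) @ a
```

One would then search `Aw` for the extrema of the error, swap them in as the new $\omega_k$, and repeat until the guessed and actual extremal locations agree.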
Computational cost

$O\left((L+2)^3\right)$ for the matrix inverse and $O(N \log N)$ for the FFT ($N \gg L$; $N \approx 32L$, typically), per iteration! This method is computationally expensive due to the matrix inverse.
A more efficient variation of this method was developed by Parks and McClellan (1972), and is based on the Remez exchange algorithm. To understand the Remez exchange algorithm, we first need to understand Lagrange interpolation.
Now $A(\omega)$ is an $L$th-order polynomial in $x = \cos\omega$, so Lagrange interpolation can be used to exactly compute $A(\omega)$ from $L + 1$ samples of $A(\omega)$: $A(\omega_k)$, $k = 0, 1, \dots, L$.
Thus, given a set of extremal frequencies $\omega_k$ and knowing $\delta$, samples of the amplitude response can be computed as
$$A(\omega_k) = A_d(\omega_k) - \frac{(-1)^k \delta}{W(\omega_k)}$$
directly from the $\omega_k$, without solving for the filter coefficients! This leads to computational savings!
Note that the alternation conditions form a set of $L + 2$ simultaneous equations, which can be solved for $\delta$ to obtain (Rabiner, 1975)
$$\delta = \frac{\sum_{k=0}^{L+1} \gamma_k A_d(\omega_k)}{\sum_{k=0}^{L+1} \frac{(-1)^k \gamma_k}{W(\omega_k)}}$$
where
$$\gamma_k = \prod_{\substack{i = 0 \\ i \neq k}}^{L+1} \frac{1}{\cos\omega_k - \cos\omega_i}$$
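A sketch of this δ computation together with barycentric Lagrange evaluation, using hypothetical extremal frequencies and $W(\omega) = 1$; the consistency check at the end works because δ is precisely the value that puts all $L+2$ samples on a single $L$th-order polynomial:

```python
import numpy as np

L = 5
wk = np.array([0.0, 0.15, 0.3, 0.4, 0.6, 0.8, 1.0]) * np.pi   # L+2 = 7 guesses
xk = np.cos(wk)
Ad = (wk <= 0.35 * np.pi).astype(float)        # hypothetical lowpass desired
Wk = np.ones_like(wk)                          # W(wk) = 1

# gamma_k = prod_{i != k} 1 / (cos wk - cos wi)
gamma = np.array([1.0 / np.prod(np.delete(xk[j] - xk, j)) for j in range(L + 2)])
delta = (gamma @ Ad) / (gamma @ ((-1.0) ** np.arange(L + 2) / Wk))

# amplitude samples at the extremal points, directly from delta
Ak = Ad - (-1.0) ** np.arange(L + 2) * delta / Wk

def A_of(w):
    """Barycentric Lagrange interpolation through the first L+1 samples."""
    x = np.cos(np.atleast_1d(w))[:, None]
    xs, As = xk[:L + 1], Ak[:L + 1]
    wt = np.array([1.0 / np.prod(np.delete(xs[j] - xs, j)) for j in range(L + 1)])
    return (wt * As / (x - xs)).sum(1) / (wt / (x - xs)).sum(1)
```

The interpolator never sees the last sample, yet the polynomial passes through it anyway; that is exactly the consistency condition from which the closed-form δ is derived, and no filter coefficients were computed.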
The result is the Parks-McClellan FIR filter design method, which is simply an application of the Remez exchange algorithm to the filter design problem. See the figure.

The cost per iteration is $O(16L^2)$, as opposed to $O\left((L+2)^3\right)$; much more efficient for large $L$. One can also interpolate to the DFT sample frequencies, take the inverse FFT to get the corresponding filter coefficients, and zero-pad and take a longer FFT to efficiently interpolate the response.
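That coefficient-recovery step can be sketched as follows: sample the amplitude response at the DFT frequencies, reinsert the linear-phase term, and inverse-FFT. Here the amplitude samples are computed from a known symmetric $h$ purely to verify the round trip; in the design loop they would come from Lagrange interpolation instead:

```python
import numpy as np

M = 11                                         # odd length
h = np.array([1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1]) / 36.0   # symmetric test filter

w = 2 * np.pi * np.arange(M) / M               # DFT sample frequencies
n = np.arange(M)

# amplitude samples A(w_m) of the zero-phase (real) part of the response
A = np.array([h @ np.cos(wm * (n - (M - 1) / 2)) for wm in w])

H = A * np.exp(-1j * w * (M - 1) / 2)          # reinsert the linear phase
h_rec = np.fft.ifft(H).real                    # back to filter coefficients
```

The recovered `h_rec` matches `h` to machine precision, since for a symmetric filter $H(\omega) = A(\omega)e^{-j\omega(M-1)/2}$ holds exactly at the DFT frequencies.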