which can be minimized by solving
$$W C a = W A_d$$
with the normal equations
$$C^T W^T W C \, a = C^T W^T W A_d$$
where $W$ is an $L$ by $L$ diagonal matrix with the weights from [link] along the diagonal. A more general formulation of the approximation simply requires $W^T W$ to be positive definite. Some authors define the weighted error in [link] using $W$ rather than $W^T W$. We use the latter to be consistent with the least squared error algorithms in Matlab [link].
Solving [link] is a direct method of designing an FIR filter using a weighted least squared error approximation. To minimize the sum of the squared error and get approximately the same result as minimizing the integral of the squared error, one must choose the number of frequency samples $L$ to be 3 to 10 or more times the length $N$ of the filter being designed.
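The direct weighted least squares design can be sketched as follows. This is an illustrative example, not the author's code: the filter length, grid size, and ideal lowpass response are all made-up values, and the cosine basis matrix assumes an odd-length linear-phase filter.

```python
import numpy as np

# Hypothetical numbers throughout: a length-N linear-phase FIR filter
# designed over a dense grid of L frequency samples by solving the
# weighted normal equations directly.
N = 21                                      # filter length (odd, linear phase)
M = (N - 1) // 2                            # highest cosine index
L = 10 * N                                  # grid: 3-10x the filter length, per the text
wk = np.linspace(0.0, np.pi, L)             # frequency samples
C = np.cos(np.outer(wk, np.arange(M + 1)))  # cosine basis matrix (L x M+1)
Ad = (wk <= np.pi / 2).astype(float)        # desired amplitude: ideal lowpass
W = np.diag(np.ones(L))                     # unit weights to start
# Normal equations: (C^T W^T W C) a = C^T W^T W Ad
a = np.linalg.solve(C.T @ W.T @ W @ C, C.T @ W.T @ W @ Ad)
```

With unit weights this reduces to the ordinary least squares solution of the overdetermined system $Ca \approx A_d$.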
There is no simple direct method for finding the optimal approximation for any error power other than two. However, if the weighting coefficients as elements of $W$ in [link] could be set equal to the magnitudes of the elements of the error vector $e$, minimizing [link] would minimize the fourth power of the error. This cannot be done in one step because we need the solution to find the weights! We can, however, pose an iterative algorithm which will first solve the problem in [link] with no weights, then calculate the error vector from [link], which will then be used to calculate the weights in [link]. At each stage of the iteration, the weights are updated from the previous error and the problem solved again. This process of successive approximations is called the iterative reweighted least squared error algorithm (IRLS).
The basic IRLS equations can also be derived by simply taking the gradient of the $p$-error with respect to the filter coefficients $h(n)$ or $a(n)$ and setting it equal to zero [link], [link]. These equations form the basis for the iterative algorithm.
If the algorithm is a contraction mapping [link], the successive approximations will converge and the limit is the solution of the minimum $p$-error approximation problem. If a general problem can be posed [link], [link], [link] as the solution of an equation in the form
$$x = f(x),$$
a successive approximation algorithm can be proposed which iteratively calculates $x$ using
$$x_{m+1} = f(x_m)$$
starting with some $x_0$. The function $f(\cdot)$ maps $x_m$ into $x_{m+1}$ and, if
$$\lim_{m \to \infty} x_m = x^*$$
where $x^* = f(x^*)$, $x^*$ is the fixed point of the mapping and a solution to [link]. The trick is to find a mapping that solves the desired problem, converges, and converges fast.
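Successive approximation can be illustrated with a toy contraction that is not from the text: $f(x) = \cos(x)$ contracts near its fixed point, so the iterates converge to the solution of $x = \cos(x)$.

```python
import math

# Successive approximation x_{m+1} = f(x_m), stopping when the iterates
# settle to within a tolerance.  Toy illustration, not filter design.
def fixed_point(f, x0, tol=1e-12, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

x_star = fixed_point(math.cos, 1.0)   # converges to x with x = cos(x)
```

Because $|f'(x^*)| < 1$ here, the mapping is a contraction near $x^*$ and convergence is guaranteed, though only linear in rate.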
By setting the weights in [link] equal to
$$w(k) = |e(k)|^{(p-2)/2},$$
the fixed point of a convergent algorithm minimizes
$$\varepsilon = \sum_k |e(k)|^p.$$
It has been shown [link] that weights always exist such that minimizing [link] also minimizes [link]. The problem is to find those weights efficiently.
The basic IRLS algorithm is started by initializing the weight matrix defined in [link] and [link] for unit weights with $W = I$. Using these weights to start, the iteration solves [link] for the filter coefficients with
$$a = (C^T W^T W C)^{-1} C^T W^T W A_d.$$
This is a formal statement of the operation. In practice one should not invert a matrix; rather, one should use a sophisticated numerical method [link] to solve the overdetermined equations in [link]. The error or residual vector [link] for each iteration is found by
$$e = C a - A_d.$$
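The basic IRLS loop can be sketched as below. The names and parameters are illustrative, not the author's code; following the advice above, `np.linalg.lstsq` solves the overdetermined weighted system rather than forming and inverting the normal-equation matrix, and a small `eps` guards against zero residual entries producing zero weights.

```python
import numpy as np

# Sketch of the basic IRLS iteration: solve the weighted least squares
# problem, recompute the residual, update the weights, repeat.
# The update w = |e|^((p-2)/2) makes the weighted squared error
# approximate the p-th power error.
def irls(C, Ad, p=4, n_iter=10, eps=1e-12):
    w = np.ones(len(Ad))                 # start with unit weights, W = I
    a = None
    for _ in range(n_iter):
        W = np.diag(w)
        a, *_ = np.linalg.lstsq(W @ C, W @ Ad, rcond=None)
        e = C @ a - Ad                   # residual for this iteration
        w = (np.abs(e) + eps) ** ((p - 2) / 2)
    return a, e
```

Note that with $p = 2$ the weight update leaves $W = I$, and every pass reduces to ordinary least squares.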