The vector of filter coefficients that is actually used is only partially updated using a form of adjustable step size in the following second order linearly weighted sum

$$\mathbf{a}_{k+1} = \lambda\,\hat{\mathbf{a}}_{k+1} + (1-\lambda)\,\mathbf{a}_k$$

where $\hat{\mathbf{a}}_{k+1}$ is the weighted least squares solution from [link].
Using this filter coefficient vector, we solve for the next error vector by going back to [link], and this defines Karlovitz's IRLS algorithm [link].
In this algorithm, $\lambda$ is a convergence parameter that takes values $0 < \lambda \le 1$. Karlovitz showed that for the proper $\lambda$, the IRLS algorithm using [link] always converges to the globally optimal $L_p$ approximation for $p$ an even integer in the range $4 \le p < \infty$. At each iteration the $L_p$ error has to be minimized over $\lambda$, which requires a line search. In other words, the full Karlovitz method requires a multi-dimensional weighted least squares minimization and a one-dimensional $p$-power error minimization at each iteration. Extensions of Karlovitz's work [link] show the one-dimensional minimization is not necessary, but practice shows the number of required iterations increases considerably and robustness is lost.
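To make the structure of Karlovitz's iteration concrete, the following Python/NumPy sketch shows one way the partial update and the line search over $\lambda$ might be organized. It is an illustration rather than the text's own code: the function name karlovitz_irls, the basis matrix C (for example, cosines evaluated on a dense frequency grid), the desired amplitude vector Ad, and the crude grid search standing in for a proper one-dimensional minimization are all assumptions made for the example.

```python
import numpy as np

def karlovitz_irls(C, Ad, p, n_iter=50, lam_grid=None):
    """Illustrative Karlovitz-style IRLS for an L_p approximation C a ~ Ad.

    C  : (L, n) real basis matrix (e.g., cosines on a dense frequency grid)
    Ad : (L,) desired amplitude samples
    p  : even integer >= 4, the L_p norm being minimized
    """
    if lam_grid is None:
        lam_grid = np.linspace(0.05, 1.0, 20)       # crude stand-in for the line search

    # start from the unweighted (L2) least squares solution
    a, *_ = np.linalg.lstsq(C, Ad, rcond=None)

    for _ in range(n_iter):
        e = C @ a - Ad                               # current error vector
        w = np.maximum(np.abs(e) ** ((p - 2) / 2), 1e-12)   # IRLS weights for the L_p error
        a_hat, *_ = np.linalg.lstsq(C * w[:, None], w * Ad, rcond=None)  # weighted LS solve

        # Karlovitz partial update: choose lambda by minimizing the L_p error
        lam = min(lam_grid,
                  key=lambda t: np.sum(np.abs(C @ (t * a_hat + (1 - t) * a) - Ad) ** p))
        a = lam * a_hat + (1 - lam) * a
    return a
```

The weights $|e|^{(p-2)/2}$ are the usual IRLS weights that make the weighted least squares problem agree with the $L_p$ error at the current solution, which is why each pass combines a multi-dimensional weighted least squares solve with a one-dimensional search over $\lambda$.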
Fletcher et al. [link] and later Kahng [link] independently derive the same second order iterative algorithm by applying Newton's method. That approach gives a formula for $\lambda$ as a function of $p$ and is discussed later in this paper. Although the iteration count for convergence of the Karlovitz method is good, indeed, perhaps the best of all, the minimization of $\lambda$ at each iteration causes the algorithm to be very slow in execution.
Both the new method in section 4.3 and Lawson's method use a second order updating of the weights to obtain convergence of the basic IRLS algorithm. Fletcher et al. [link] and Kahng [link] use a linear summation for the updating similar in form to [link] but apply it to the filter coefficients in the manner of Karlovitz rather than the weights as Lawson did. Indeed, using our development of Karlovitz's method, we see that Kahng's method and Fletcher, Grant, and Hebden's method are simply a particular choice of $\lambda$ as a function of $p$ in Karlovitz's method. They derive

$$\lambda = \frac{1}{p-1}$$

by using Newton's method to minimize the $L_p$ error in [link], which gives for [link]

$$\mathbf{a}_{k+1} = \frac{1}{p-1}\left[\hat{\mathbf{a}}_{k+1} + (p-2)\,\mathbf{a}_k\right]$$
This defines Kahng's method, which he says always converges [link]. He also notes that the summation methods in [link] do not have the possible restarting problem that Lawson's method theoretically does. Because Kahng's algorithm is a form of Newton's method, its asymptotic convergence is very good but the initial convergence is poor and very sensitive to starting values.
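As a sketch of how the Newton-derived step changes the iteration, the hypothetical kahng_irls below simply replaces the line search of the earlier example with the fixed value $\lambda = 1/(p-1)$; the names and assumptions are the same as before and are not from the text.

```python
import numpy as np

def kahng_irls(C, Ad, p, n_iter=50):
    """Illustrative Fletcher/Kahng variant: Karlovitz's update with the
    fixed Newton step lambda = 1/(p-1) instead of a line search."""
    a, *_ = np.linalg.lstsq(C, Ad, rcond=None)       # initial L2 solution
    lam = 1.0 / (p - 1)                              # Newton-derived step size
    for _ in range(n_iter):
        e = C @ a - Ad
        w = np.maximum(np.abs(e) ** ((p - 2) / 2), 1e-12)
        a_hat, *_ = np.linalg.lstsq(C * w[:, None], w * Ad, rcond=None)
        a = lam * a_hat + (1 - lam) * a              # same as (a_hat + (p-2) a) / (p-1)
    return a
```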
A modification and generalization of an acceleration method suggested independently by Ekblom [link] and by Kahng [link] is developed here and combined with the Newton's method of Fletcher, Grant, and Hebden and of Kahng to give a robust, fast, and accurate IRLS algorithm [link], [link]. It overcomes the poor initial performance of the Newton's methods and the poor final performance of the RUL algorithms.
Rather than starting the iterations of the IRLS algorithms with the actual desired value of $p$, after the initial approximation, the new algorithm starts with $p = 2K$, where $K$ is a parameter between one and approximately two, chosen for the particular problem specifications. After the first iteration, the value of $p$ is increased to $2K^2$. It is increased by a factor of $K$ at each iteration until it reaches the actual desired value. This keeps the value of $p$ being approximated just ahead of the value achieved. This is similar to a homotopy where we vary the value of $p$ from 2 to its final value. A small value of $K$ gives very reliable convergence because the approximation is achieved at each iteration but requires a large number of iterations for $p$ to reach its final value. A large value of $K$ gives faster convergence for most filter specifications but fails for some. The rule that is used to choose $p_k$ at the $k$th iteration is

$$p_k = \min\left(p_{des},\; K\,p_{k-1}\right)$$
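A minimal sketch of this accelerated scheme, under the same assumptions as the earlier examples (the name accelerated_irls and the illustrative default $K = 1.5$ are not from the text), combines the homotopy rule for $p$ with the Newton step at each iteration:

```python
import numpy as np

def accelerated_irls(C, Ad, p_des, K=1.5, n_iter=30):
    """Illustrative accelerated IRLS: increase p gradually toward p_des
    while applying a Kahng-style partial update at each iteration."""
    a, *_ = np.linalg.lstsq(C, Ad, rcond=None)       # initial L2 approximation
    p = 2.0
    for _ in range(n_iter):
        p = min(p_des, K * p)                        # homotopy rule p_k = min(p_des, K p_{k-1})
        e = C @ a - Ad
        w = np.maximum(np.abs(e) ** ((p - 2) / 2), 1e-12)
        a_hat, *_ = np.linalg.lstsq(C * w[:, None], w * Ad, rcond=None)
        lam = 1.0 / (p - 1)                          # Newton step from Fletcher/Kahng
        a = lam * a_hat + (1 - lam) * a
    return a
```

A smaller $K$ makes each target $p_k$ only slightly larger than the value already achieved, trading extra iterations for reliability, while a larger $K$ reaches $p_{des}$ in fewer iterations at the risk of failure on some specifications.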