$$Y_1 = W_1 X_1, \qquad Y_2 = W_2 X_2, \qquad Y_3(s) = W_3(s)\, X_3(s) \mod (s^2 + 1)$$

where $W_1$ and $W_2$ are constants and $W_3(s)$ is a first degree polynomial. $X_1$ times $W_1$ and $X_2$ times $W_2$ are easy, but multiplying $X_3(s)$ times $W_3(s)$ modulo $(s^2 + 1)$ is more difficult.
The multiplication of $X_3(s)$ times $W_3(s)$ can be done by the Toom-Cook algorithm [link] , [link] , [link] , which can be viewed as Lagrange interpolation or polynomial multiplication modulo a special polynomial with three arbitrary coefficients. To simplify the arithmetic, the constants are chosen to be plus and minus one and zero. The details of this can be found in [link] , [link] , [link] . For this example it can be verified that
which by the Toom-Cook algorithm or inspection is
where $\circ$ signifies point-by-point multiplication. The total $A$ matrix in [link] is a combination of [link] and [link] giving

$$A = A_1\, A_2\, A_3$$
where the matrix $A_3$ gives the residue reduction modulo $(s^2 - 1)$ and $(s^2 + 1)$, the upper left-hand part of $A_2$ gives the reduction modulo $(s - 1)$ and $(s + 1)$, and the lower right-hand part of $A_1$ carries out the Toom-Cook algorithm modulo $(s^2 + 1)$ with the multiplication in [link] . Notice that by calculating [link] in the three stages, seven additions are required. Also notice that $A$ is not square. It is this “expansion” that causes more than $N-1$ multiplications to be required in $\circ$ in [link] or $D$ in [link] . This staged reduction will derive the $A$ operator for [link] .
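As a concrete illustration of the two pieces just described, here is a short Python sketch. It is not taken from the text: the explicit 0, ±1 matrix entries are one consistent reconstruction of the staged reduction, and the three-multiplication product uses the common evaluation points 0, 1, and infinity, which may differ in detail from the constants of the referenced equations. The sketch builds $A_1$, $A_2$, and $A_3$, confirms the seven-addition count and the non-square shape of $A$, and checks the short product modulo $(s^2+1)$ against direct reduction.

```python
import numpy as np

# ---- Three-multiplication product modulo (s^2 + 1) -------------------------
def mul_mod_s2_plus_1(x, w):
    """Multiply x(s) = x[0] + x[1]*s by w(s) = w[0] + w[1]*s modulo (s^2 + 1)
    with three multiplications (Toom-Cook style, evaluation at 0, 1, infinity)."""
    m0 = x[0] * w[0]
    m1 = x[1] * w[1]
    m2 = (x[0] + x[1]) * (w[0] + w[1])    # the w-side sum can be precomputed
    return np.array([m0 - m1,              # coefficient of s^0 (s^2 -> -1)
                     m2 - m0 - m1])        # coefficient of s^1

# Check against direct multiplication followed by reduction modulo (s^2 + 1).
x3, w3 = np.array([2.0, 3.0]), np.array([-1.0, 4.0])
full = np.convolve(x3, w3)                 # c0 + c1 s + c2 s^2
assert np.allclose(mul_mod_s2_plus_1(x3, w3), [full[0] - full[2], full[1]])

# ---- Staged residue reduction A = A1 A2 A3 ---------------------------------
# A3: reduce x0 + x1 s + x2 s^2 + x3 s^3 modulo (s^2 - 1) and (s^2 + 1).
A3 = np.array([[1, 0,  1,  0],
               [0, 1,  0,  1],
               [1, 0, -1,  0],
               [0, 1,  0, -1]])            # 4 additions
# A2: upper left reduces the (s^2 - 1) residue modulo (s - 1) and (s + 1);
#     lower right passes the (s^2 + 1) residue through unchanged.
A2 = np.array([[1,  1, 0, 0],
               [1, -1, 0, 0],
               [0,  0, 1, 0],
               [0,  0, 0, 1]])             # 2 additions
# A1: upper left passes the two constant residues through; lower right forms
#     the three Toom-Cook inputs v0, v1, v0 + v1 for the (s^2 + 1) product.
A1 = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 1, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 1]])              # 1 addition, 7 additions in total

A = A1 @ A2 @ A3
print(A.shape)      # (5, 4): A is not square, hence the "expansion"

x = np.array([1.0, 2.0, 3.0, 4.0])
print(A @ x)        # [x(1), x(-1), v0, v1, v0 + v1]
```

Running the sketch shows that four data values expand to five values to be multiplied, which is the "expansion" noted above.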
The method described above is very straightforward for the shorter DFT lengths. For $N=3$, both of the residue polynomials are constants and the multiplication given by $\circ$ in [link] is trivial. For $N=5$, which is the example used here, there is one first degree polynomial multiplication required, but the Toom-Cook algorithm uses simple constants and, therefore, works well as indicated in [link] . For $N=7$, there are two first degree residue polynomials which can each be multiplied by the same techniques used in the $N=5$ example. Unfortunately, for any longer lengths, the residue polynomials have an order of three or greater, which causes the Toom-Cook algorithm to require constants of plus and minus two and worse. For that reason, the Toom-Cook method is not used, and other techniques such as index mapping are used that require more than the minimum number of multiplications but do not require an excessive number of additions. The resulting algorithms still have the structure of [link] . Blahut [link] and Nussbaumer [link] have a good collection of algorithms for polynomial multiplication that can be used with the techniques discussed here to construct a wide variety of DFT algorithms.
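The claim about residue degrees can be checked quickly with a computer algebra system. The snippet below is a side calculation, not part of the original development; it assumes SymPy is available and simply lists the degrees of the cyclotomic factors of $s^{N-1}-1$ for several prime lengths (each residue polynomial has degree one less than its factor).

```python
from sympy import symbols, factor_list, degree

s = symbols('s')

# s^(N-1) - 1 factors into cyclotomic polynomials; the degrees of those
# factors set the degrees of the residue polynomials to be multiplied.
for N in (3, 5, 7, 11, 13):
    factors = factor_list(s**(N - 1) - 1)[1]          # [(factor, multiplicity), ...]
    print(N, sorted(degree(p, s) for p, _ in factors))

# Expected factor degrees:
#   3  [1, 1]             -> residues are constants
#   5  [1, 1, 2]          -> one first degree residue (the example above)
#   7  [1, 1, 2, 2]       -> two first degree residues
#  11  [1, 1, 4, 4]       -> third degree residues appear
#  13  [1, 1, 2, 2, 2, 4]
```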
The constants in the diagonal matrix $D$ can be found from the CRT matrix in [link] . As mentioned above, for the smaller prime lengths of 3, 5, and 7 this works well, but for longer lengths the CRT becomes very complicated. An alternate method for finding $D$ uses the fact that since the linear form [link] or [link] calculates the DFT, it is possible to calculate a known DFT of a given $x(n)$ from the definition of the DFT in Multidimensional Index Mapping: Equation 1 and, given the $A$ matrix in [link] , solve for $D$ by solving a set of simultaneous equations. The details of this procedure are described in [link] .
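A minimal sketch of that simultaneous-equation approach is given below. It is illustrative only: `solve_for_diagonal` is a hypothetical helper, and the matrices `C` and `A` here are random stand-ins for the actual reduction and CRT reconstruction matrices (whose entries are not reproduced in this section). The point is that $C\,\mathrm{diag}(A x)\,d$ is linear in the unknown diagonal $d$, so a few known transform values determine it.

```python
import numpy as np

def solve_for_diagonal(C, A, transform, n_probes=3, seed=0):
    """Recover the diagonal d in a factorization  X = C diag(d) A x  by
    evaluating the known transform on a few probe vectors and solving the
    resulting set of simultaneous (least squares) equations for d."""
    rng = np.random.default_rng(seed)
    rows, rhs = [], []
    for _ in range(n_probes):
        x = rng.standard_normal(A.shape[1])
        rows.append(C @ np.diag(A @ x))    # C diag(A x) d = transform(x) is linear in d
        rhs.append(transform(x))
    d, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
    return d

# Self-check on a synthetic factorization with known C, A, and diagonal:
rng = np.random.default_rng(1)
C = rng.standard_normal((4, 5))
A = rng.standard_normal((5, 4))
d_true = rng.standard_normal(5)
F = C @ np.diag(d_true) @ A                       # plays the role of the exact DFT
d_est = solve_for_diagonal(C, A, lambda x: F @ x)
print(np.allclose(d_est, d_true))                 # True
```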