Large DFT Modules: 11, 13, 16, 17, 19, and 25
The length 25 module does not follow the traditional Winograd approach. This module is an in-line code version of a common-factor 5x5 DFT (a sketch of this decomposition is given below). Each length 5 DFT is a prime-length convolutional module. The output unscrambling is included in the assignment statements at the end of the program. Some of the length 5 modules used in this program are implemented as scaled versions of conventional length 5 modules in order to save some multiplies by 1/4; the scaling factors are then compensated for by adjusting the twiddle factors. This module has three multiply sections: one for the row DFTs with a data expansion factor of 6/5, one for the twiddle factors (expansion = 33/25), and one for the column DFTs (expansion = 6/5).
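A minimal numpy sketch of the 5x5 common-factor decomposition, checked against a library FFT, may make the three multiply sections easier to see. The index maps n = 5*n1 + n2 and k = k1 + 5*k2, the function names, and the use of plain matrix-form length 5 DFTs are assumptions for illustration; the report's module in-lines scaled convolutional length 5 modules and folds the scaling into the twiddle factors.

```python
import numpy as np

def dft5(x):
    """Direct 5-point DFT (stand-in for the report's convolutional length-5 modules)."""
    n = np.arange(5)
    return np.exp(-2j * np.pi * np.outer(n, n) / 5) @ x

def dft25_common_factor(x):
    """25-point DFT by the 5x5 common-factor decomposition: row DFTs,
    twiddle factors, column DFTs.  Assumed index maps: n = 5*n1 + n2,
    k = k1 + 5*k2 (the report's in-line code and scaling are not reproduced)."""
    x = np.asarray(x, dtype=complex).reshape(5, 5)        # x[n1, n2]
    # Multiply section 1: length-5 DFTs over n1 for each n2 ("row" DFTs).
    t = np.array([dft5(x[:, n2]) for n2 in range(5)])     # t[n2, k1]
    # Multiply section 2: twiddle factors W25^(n2*k1).
    n2g, k1g = np.meshgrid(np.arange(5), np.arange(5), indexing="ij")
    t = t * np.exp(-2j * np.pi * n2g * k1g / 25)
    # Multiply section 3: length-5 DFTs over n2 for each k1 ("column" DFTs).
    X = np.array([dft5(t[:, k1]) for k1 in range(5)]).T   # X[k2, k1]
    # Output unscrambling: result index k = k1 + 5*k2.
    return X.reshape(25)

x = np.random.randn(25) + 1j * np.random.randn(25)
assert np.allclose(dft25_common_factor(x), np.fft.fft(x))
```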
Modules for lengths 11 and 13 are very similar in spirit to the length 19 and 17 modules. Derivations consistent with the listings are presented for both the length 11 and length 13 modules; although these interpretations may not agree with the original intentions of the designer [link], they are correct in the sense that the algorithms could have been derived in the stated manner. Both modules are of prime length and are implemented in Winograd's convolutional style.
FORTRAN listings for all five modules are included with this report in subroutine form suitable for use in Burrus' PFA program [link]. Addition and multiplication counts given are for complex input data.
17 module: 314 adds / 70 mpys
This module closely follows the traditional Winograd prime-length approach.
- Use the index map n -> g^n mod 17, with g a primitive root of 17, to convert the DFT into a length 16 convolution, plus a correction term for the DC component (see the index-map sketch following this list).
- Reduce the length 16 convolution modulo all the irreducible factors of z^16 - 1 (irreducible over the rationals): z - 1, z + 1, z^2 + 1, z^4 + 1 and z^8 + 1. The z^2 + 1 reduction yields 2 data values, the z^4 + 1 reduction 4, and the z^8 + 1 reduction 8 (r108-r115); see the reduction sketch following this list.
- Reduce the convolution modulo z^2 + 1 using Toom-Cook factors of z, z - 1 and z + 1 (see the z^2 + 1 sketch following this list). This creates variables r35, r36, and r314.
- Reduce the modulo z^4 + 1 convolution with an iterated Toom-Cook reduction using the factors z^2, z^2 - 1 and z^2 + 1 for the first step, and the factors z, z - 1 and z + 1 for the second step (see the z^4 + 1 sketch following this list). The first step produces r310 and r39, and the second step computes r313, r312 and r311. This is exactly the reduction procedure used in Nussbaumer's z^4 + 1 convolution algorithm.
- Patch up the DC term by adding the z - 1 reduction result to the x(0) input term.
- Use Nussbaumer's z^8 + 1 convolution algorithm [link] on r108-r115. This is the only exception to the strict use of transposing the tensor, as his algorithm saves two additions by computing the transposed reconstruction procedure in an obscure fashion. The result, however, is an exact calculation of the transpose. This reduction computes twenty-one values, r315-r335, which must be weighted by coefficients to produce the reconstructed z^8 + 1 output, t115-t135.
- Weight the variables r31-r39, r310-r314 by coefficients to produce t11-t19, t110-t114.
- The reconstruction procedure for the remaining terms is a straightforward transpose of the reduction procedure.
- The length 16 convolution result is reconstructed from the real and imaginary vectors and mapped back to the outputs using the reverse of the input index map.
- All coefficients were computed using the author's QR decomposition linear equation solver and are accurate to at least 14 places.
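The sketches below illustrate the generic techniques named in the steps above; they are not transcriptions of the report's FORTRAN listings, and all variable names, the choice of primitive root, and the add structure are assumptions. First, the index-map step: a numpy check that permuting a 17-point DFT by powers of a primitive root (here 3) turns its non-DC part into a length 16 cyclic convolution, with the DC term formed by adding the z - 1 reduction (the sum of the permuted inputs) to x(0).

```python
import numpy as np

p, g = 17, 3          # 3 is a primitive root modulo 17 (assumed index map; the
                      # report's actual choice of root is not shown in this extract)
N = p - 1             # length of the resulting cyclic convolution

x = np.random.randn(p) + 1j * np.random.randn(p)
X = np.fft.fft(x)     # reference DFT

# Input permutation: u[m] = x[g^-m mod p]; fixed kernel: v[m] = W^(g^m mod p).
ginv = pow(g, p - 2, p)                       # modular inverse of g
u = np.array([x[pow(ginv, m, p)] for m in range(N)])
v = np.exp(-2j * np.pi * np.array([pow(g, m, p) for m in range(N)]) / p)

# Length-16 cyclic convolution of u and v.
conv = np.array([sum(u[m] * v[(l - m) % N] for m in range(N)) for l in range(N)])

# Non-DC outputs: X[g^l mod p] = x(0) + (cyclic convolution)[l].
for l in range(N):
    assert np.isclose(X[pow(g, l, p)], x[0] + conv[l])

# DC patch: X[0] is x(0) plus the sum of the permuted inputs (the z - 1 reduction).
assert np.isclose(X[0], x[0] + u.sum())
```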
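Next, the reduction step: the irreducible factors of z^16 - 1 over the rationals and the reduction of a length 16 data polynomial modulo each of them, sketched with numpy's polynomial division rather than the module's explicit adds.

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Irreducible factors of z^16 - 1 over the rationals, lowest-order coefficient first.
factors = [
    [-1, 1],                       # z - 1
    [ 1, 1],                       # z + 1
    [ 1, 0, 1],                    # z^2 + 1
    [ 1, 0, 0, 0, 1],              # z^4 + 1
    [ 1, 0, 0, 0, 0, 0, 0, 0, 1],  # z^8 + 1
]

# Their product really is z^16 - 1.
prod = [1.0]
for f in factors:
    prod = P.polymul(prod, f)
z16m1 = np.zeros(17); z16m1[0], z16m1[16] = -1.0, 1.0
assert np.allclose(prod, z16m1)

# Reduce a length-16 data polynomial (the permuted input sequence) modulo each factor.
data = np.random.randn(16)
residues = [P.polydiv(data, f)[1] for f in factors]
for f, r in zip(factors, residues):
    print(f"modulo degree-{len(f) - 1} factor: {len(r)} residue coefficient(s)")
# The module multiplies each residue by the matching residue of the fixed kernel
# and recombines the products with the transposed (reconstruction) procedure.
```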
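The z^2 + 1 step: a three-multiplication product modulo z^2 + 1 built from the Toom-Cook factors z, z - 1 and z + 1. How the three products map onto r35, r36 and r314 in the listing is not shown here; the routine below is only the generic scheme.

```python
import numpy as np

def mult_mod_z2p1(a, b):
    """Product of a0 + a1*z and b0 + b1*z modulo z^2 + 1 using three
    multiplications (Toom-Cook factors z, z - 1 and z + 1)."""
    m0 = a[0] * b[0]                        # product of residues modulo z
    m1 = (a[0] + a[1]) * (b[0] + b[1])      # product of residues modulo z - 1
    m2 = (a[0] - a[1]) * (b[0] - b[1])      # product of residues modulo z + 1
    # Reconstruct c0 + c1*z + c2*z^2 and fold z^2 -> -1.  In a DFT module one
    # operand is a fixed kernel, so the 1/2 factors below are absorbed into the
    # precomputed coefficients and only the three products m0, m1, m2 remain.
    c0, c1, c2 = m0, (m1 - m2) / 2, (m1 + m2) / 2 - m0
    return np.array([c0 - c2, c1])

# The result matches the complex-style product (a0*b0 - a1*b1, a0*b1 + a1*b0).
a, b = np.random.randn(2), np.random.randn(2)
direct = np.array([a[0] * b[0] - a[1] * b[1], a[0] * b[1] + a[1] * b[0]])
assert np.allclose(mult_mod_z2p1(a, b), direct)
```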
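The z^4 + 1 step: one way to realize the iterated Toom-Cook reduction, reducing by z^2, z^2 - 1 and z^2 + 1 and then by z, z - 1 and z + 1 for a nine-multiplication product modulo z^4 + 1. The divisions by 2 in the recombination would be folded into the precomputed coefficients of an actual module, and the exact split used in the report's listing may differ.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def toom3(p, q):
    """3-multiply Toom-Cook product of two degree-1 polynomials
    (reduction modulo the factors z, z - 1 and z + 1)."""
    m0 = p[0] * q[0]
    m1 = (p[0] + p[1]) * (q[0] + q[1])
    m2 = (p[0] - p[1]) * (q[0] - q[1])
    return np.array([m0, (m1 - m2) / 2, (m1 + m2) / 2 - m0])

def poly_mult_mod_z4p1(a, b):
    """Product of two degree-3 polynomials modulo z^4 + 1 in 3 x 3 = 9
    multiplications via an iterated Toom-Cook split (a sketch only)."""
    a = np.asarray(a, float); b = np.asarray(b, float)
    # Write a(z) = A0(z) + A1(z)*u with u = z^2 and A0, A1 of degree 1.
    # First step: the residues modulo z^2, z^2 - 1 and z^2 + 1 are
    # A0, A0 + A1 and A0 - A1 (each a degree-1 polynomial in z).
    ra = [a[0:2], a[0:2] + a[2:4], a[0:2] - a[2:4]]
    rb = [b[0:2], b[0:2] + b[2:4], b[0:2] - b[2:4]]
    # Second step: multiply matching residues with the 3-multiply Toom-Cook
    # product above -> 9 scalar multiplications in all.
    m0, m1, m2 = (toom3(p, q) for p, q in zip(ra, rb))
    # Recombine: a*b = A0B0 + (A0B1 + A1B0)*u + A1B1*u^2, and u^2 = z^4 = -1.
    hi = (m1 + m2) / 2 - m0            # A1*B1
    even = m0 - hi                     # coefficient of u^0: A0B0 - A1B1
    odd = (m1 - m2) / 2                # coefficient of u^1: A0B1 + A1B0
    # even(z) + odd(z)*z^2 still reaches z^4; fold it back with z^4 = -1.
    return np.array([even[0] - odd[2], even[1], even[2] + odd[0], odd[1]])

# Check against a direct product reduced modulo z^4 + 1.
a = np.random.randn(4); b = np.random.randn(4)
ref = P.polydiv(P.polymul(a, b), [1, 0, 0, 0, 1])[1]
assert np.allclose(poly_mult_mod_z4p1(a, b), np.pad(ref, (0, 4 - len(ref))))
```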
Source: OpenStax CNX, "Large DFT Modules: 11, 13, 16, 17, 19, and 25. Revised ECE Technical Report 8105," Sep 14, 2009. Download for free at http://cnx.org/content/col10569/1.7