The most commonly used FFT algorithms by far are the power-of-two-length FFT algorithms. The Prime Factor Algorithm (PFA) and the Winograd Fourier Transform Algorithm (WFTA) require somewhat fewer multiplies, but the overall difference usually isn't sufficient to warrant the extra difficulty. This is particularly true now that most processors have single-cycle pipelined hardware multipliers, so the total operation count is more relevant. As can be seen from the following table, for similar lengths the split-radix algorithm is comparable in total operations to the Prime Factor Algorithm and considerably better than the WFTA, although the PFA and WFTA require fewer multiplications and more additions. Many processors now support single-cycle multiply-accumulate (MAC) operations; in the power-of-two algorithms all multiplies can be combined with adds in MACs, so the number of additions is the most relevant indicator of computational cost.
| FFT algorithm | FFT length | Multiplies (real) | Adds (real) | Mults + Adds |
|---|---|---|---|---|
| Radix 2 | 1024 | 10248 | 30728 | 40976 |
| Split Radix | 1024 | 7172 | 27652 | 34824 |
| Prime Factor Alg | 1008 | 5804 | 29100 | 34904 |
| Winograd FT Alg | 1008 | 3548 | 34416 | 37964 |
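To make the MAC point concrete, the following sketch (plain C, with illustrative names of our own choosing) writes out one radix-2 decimation-in-time butterfly. Each of the four real multiplies in the twiddle-factor product feeds directly into an add or subtract, exactly the pattern a single-cycle MAC instruction absorbs, which is why the addition count is the better predictor of cost on such hardware.

```c
/* Sketch: one radix-2 decimation-in-time butterfly, written so each
 * real multiply pairs with an add/subtract -- the pattern a
 * single-cycle MAC unit executes as one instruction.  The four
 * multiplies fuse with four of the six additions, leaving the
 * addition count as the dominant cost on MAC hardware. */
typedef struct { double re, im; } cpx;

static void butterfly(cpx *a, cpx *b, cpx w)   /* w = twiddle factor */
{
    /* t = w * b : 4 real multiplies, 2 real adds (fusable as MACs) */
    double tr = w.re * b->re - w.im * b->im;
    double ti = w.re * b->im + w.im * b->re;
    /* a' = a + t,  b' = a - t : 4 more real adds */
    b->re = a->re - tr;  b->im = a->im - ti;
    a->re += tr;         a->im += ti;
}
```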
The choice of a power-of-two algorithm may not depend on computational complexity alone. The latest extensions of the split-radix algorithm offer the lowest known power-of-two FFT operation counts, but the 10%-30% difference may not make up for other factors such as regularity of structure or data flow, FFT programming tricks, or special hardware features. For example, the decimation-in-time radix-2 FFT is the fastest FFT on Texas Instruments' TMS320C54x DSP microprocessors, because this processor family has special assembly-language instructions that accelerate this particular algorithm. On other hardware, radix-4 algorithms may be more efficient. Some devices, such as AMI Semiconductor's Toccata ultra-low-power DSP microprocessor family, have on-chip FFT accelerators; it is always faster and more power-efficient to use these accelerators and whatever radix they prefer. For fast convolution, the decimation-in-frequency algorithms may be preferred because the bit-reversing can be bypassed; however, most DSP microprocessors provide zero-overhead bit-reversed indexing hardware and prefer decimation-in-time algorithms, so on such machines this advantage may disappear. Good, compiler- or hardware-friendly programming always matters more than modest differences in raw operation counts, so manufacturers' or good third-party FFT libraries are often the best choice. The module FFT programming tricks references some good, free FFT software (including the FFTW package) that is carefully coded to be compiler-friendly; such codes are likely to be considerably faster than codes written by the casual programmer.
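As an illustration of what that zero-overhead bit-reversed indexing hardware saves, here is a plain-C sketch of the bit-reversed reordering that decimation-in-time FFTs require on their input (and that decimation-in-frequency FFTs produce on their output). The function names are ours, not from any particular library.

```c
/* Sketch: software bit-reversed reordering.  DSP chips with
 * bit-reversed addressing perform this permutation for free as a
 * side effect of indexing; this plain-C version shows the work
 * that hardware eliminates. */
#include <complex.h>

/* Reverse the low `bits` bits of index i (e.g. bits = 10 for N = 1024). */
static unsigned bit_reverse(unsigned i, unsigned bits)
{
    unsigned r = 0;
    for (unsigned b = 0; b < bits; b++) {
        r = (r << 1) | (i & 1u);
        i >>= 1;
    }
    return r;
}

/* In-place bit-reversed permutation of an array of length N = 2^bits. */
static void brev_permute(double complex *x, unsigned bits)
{
    unsigned n = 1u << bits;
    for (unsigned i = 0; i < n; i++) {
        unsigned j = bit_reverse(i, bits);
        if (j > i) {               /* swap each pair exactly once */
            double complex t = x[i];
            x[i] = x[j];
            x[j] = t;
        }
    }
}
```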
Multi-dimensional FFTs pose additional possibilities and problems. The orthogonality and separability of multi-dimensional DFTs allow them to be efficiently computed by a series of one-dimensional FFTs along each dimension. (For example, a two-dimensional DFT can quickly be computed by performing FFTs of each row of the data matrix followed by FFTs of all columns, or vice-versa.) Vector-radix FFTs have been developed with higher efficiency per sample than row-column algorithms. Multi-dimensional datasets, however, are often large and frequently exceed the cache size of the processor, and excessive cache misses may increase the computational time greatly, overwhelming any minor complexity reduction from a vector-radix algorithm. Either vector-radix FFTs must be carefully programmed to match the cache limitations of a specific processor, or a row-column approach should be used with matrix transposition in between to ensure data locality for high cache utilization throughout the computation.
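A minimal sketch of the row-column approach with an intervening transpose follows. The one-dimensional transform here is a placeholder O(n²) DFT purely to keep the example self-contained; a real implementation would substitute an optimized 1D FFT. The point is the memory-access pattern: both passes walk contiguous rows, so cache lines are fully used.

```c
/* Sketch: row-column 2D DFT with an explicit transpose between the
 * two passes, so each 1D pass reads contiguous memory. */
#include <complex.h>
#include <math.h>
#include <stdlib.h>

static void dft_1d(double complex *x, int n)   /* placeholder O(n^2) DFT */
{
    const double pi = acos(-1.0);
    double complex *y = malloc(n * sizeof *y);
    for (int k = 0; k < n; k++) {
        y[k] = 0;
        for (int j = 0; j < n; j++)
            y[k] += x[j] * cexp(-I * 2.0 * pi * ((double)k * j) / n);
    }
    for (int k = 0; k < n; k++) x[k] = y[k];
    free(y);
}

/* 2D DFT of an nrows x ncols matrix stored row-major in a[]. */
static void dft_2d(double complex *a, int nrows, int ncols)
{
    /* Pass 1: transform each row (contiguous in memory). */
    for (int r = 0; r < nrows; r++)
        dft_1d(a + (size_t)r * ncols, ncols);

    /* Transpose so the column transforms also walk contiguous data. */
    double complex *t = malloc((size_t)nrows * ncols * sizeof *t);
    for (int r = 0; r < nrows; r++)
        for (int c = 0; c < ncols; c++)
            t[(size_t)c * nrows + r] = a[(size_t)r * ncols + c];

    /* Pass 2: transform each "column" (now a contiguous row of t). */
    for (int c = 0; c < ncols; c++)
        dft_1d(t + (size_t)c * nrows, nrows);

    /* Transpose back to the original layout. */
    for (int c = 0; c < ncols; c++)
        for (int r = 0; r < nrows; r++)
            a[(size_t)r * ncols + c] = t[(size_t)c * nrows + r];
    free(t);
}
```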
FFT algorithms gain their efficiency through intermediate computations that can be reused to compute many DFT frequency samples at once. Some applications require only a handful of frequency samples to be computed; Goertzel's algorithm computes a single DFT sample with about N multiply-adds, so computing K samples directly costs on the order of KN operations, which beats the N log₂(N) cost of a full length-N FFT when K is of order less than log₂(N). Direct computation has the additional advantage that any frequency, not just the equally-spaced DFT frequency samples, can be selected. Sorensen and Burrus developed algorithms for when most input samples are zero or when only a block of DFT frequencies is needed, but the computational cost is of the same order.
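A sketch of Goertzel's algorithm appears below, in the general form that accepts any frequency ω in radians per sample, not just the DFT bin frequencies 2πk/N. Each output sample costs roughly N real multiply-adds in the loop, consistent with the operation-count argument above.

```c
/* Sketch: Goertzel's algorithm for a single frequency sample
 * X(w) = sum over n of x[n] * exp(-j*w*n), for arbitrary w.
 * The loop uses only real arithmetic; complex work happens once
 * at the end. */
#include <complex.h>
#include <math.h>

static double complex goertzel(const double *x, int n, double w)
{
    double coeff = 2.0 * cos(w);
    double s1 = 0.0, s2 = 0.0;          /* filter state s[n-1], s[n-2] */
    for (int i = 0; i < n; i++) {
        double s = x[i] + coeff * s1 - s2;
        s2 = s1;
        s1 = s;
    }
    /* y[N-1] = s1 - exp(-jw)*s2 equals exp(jw(N-1)) * X(w);
     * remove that phase so the result matches the DFT definition.
     * (For w = 2*pi*k/N this factor reduces to exp(jw).) */
    double complex y = s1 - cexp(-I * w) * s2;
    return y * cexp(-I * w * (n - 1));
}
```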
Some applications, such as time-frequency analysis via the short-time Fourier transform or spectrogram, require DFTs of overlapped blocks of discrete-time samples. When the step-size between blocks is less than log₂(N), the running FFT will be most efficient. (Note that any window must be applied via frequency-domain convolution, which is quite efficient for sinusoidal windows such as the Hamming window.) For step-sizes of log₂(N) or greater, computation of the DFT of each successive block via an FFT is faster.
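The sketch below shows the core update of one common running-FFT formulation, the sliding DFT: each new input sample advances all N bins with O(N) work, versus O(N log₂ N) to recompute a block FFT, which is the source of the log₂(N) crossover above. Variable names are illustrative; a rectangular window is assumed, with any tapering window applied afterward by frequency-domain convolution as noted above.

```c
/* Sketch: sliding DFT update.  X[k] holds the DFT of the current
 * length-n window; slide the window one sample by dropping x_old
 * (the sample leaving the window) and admitting x_new (the sample
 * entering it).  Cost: O(n) per input sample. */
#include <complex.h>
#include <math.h>

static void sliding_dft_step(double complex *X, int n,
                             double x_old, double x_new)
{
    const double pi = acos(-1.0);
    for (int k = 0; k < n; k++) {
        double complex rot = cexp(I * 2.0 * pi * k / n);
        X[k] = rot * (X[k] - x_old + x_new);
    }
}
```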