A fast Fourier transform, or FFT, is not a new transform, but is a computationally efficient algorithm for computing the DFT. The length-$N$ DFT, defined as
$$X(k) = \sum_{n=0}^{N-1} x(n)\, e^{-j 2\pi nk/N}, \qquad k = 0, 1, \ldots, N-1,$$
requires on the order of $N^2$ complex multiplications and additions when computed directly from this sum.
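To make the definition concrete, the following is a minimal sketch of direct DFT computation in Python; the function name dft_direct is illustrative, not from any particular library. Each of the $N$ outputs requires a sum over $N$ inputs, which is the $N^2$ cost that FFT algorithms avoid.

```python
import cmath

def dft_direct(x):
    """Direct evaluation of X(k) = sum_n x(n) e^{-j 2*pi*n*k/N} for k = 0..N-1."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]
```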
It is now known that C.F. Gauss invented an FFT in 1805 or so to assist the computation of planetary orbits via discrete Fourier series. Various FFT algorithms were independently invented over the next two centuries, but FFTs achieved widespread awareness and impact only with the Cooley and Tukey algorithm published in 1965, which came at a time of increasing use of digital computers and when the vast range of applications of numerical Fourier techniques was becoming apparent. Cooley and Tukey's algorithm spawned a surge of research in FFTs and was also partly responsible for the emergence of Digital Signal Processing (DSP) as a distinct, recognized discipline. Since then, many different algorithms have been rediscovered or developed, and efficient FFTs now exist for all DFT lengths.
The main strategy behind most FFT algorithms is to factor a length-$N$ DFT into a number of shorter-length DFTs, the outputs of which are reused multiple times (usually in additional short-length DFTs!) to compute the final results. The lengths of the short DFTs correspond to integer factors of the DFT length, $N$, leading to different algorithms for different lengths and factors. By far the most commonly used FFTs select $N$ to be a power of two, leading to the very efficient power-of-two FFT algorithms, including the decimation-in-time radix-2 FFT and the decimation-in-frequency radix-2 FFT algorithms, the radix-4 FFT (for $N$ a power of four), and the split-radix FFT. Power-of-two algorithms gain their high efficiency from extensive reuse of intermediate results and from the low complexity of length-2 and length-4 DFTs, which require no multiplications. Algorithms for lengths with repeated common factors (such as 2 or 4 in the radix-2 and radix-4 algorithms, respectively) require extra twiddle factor multiplications between the short-length DFTs, which together lead to a computational complexity of $O(N \log N)$, a very considerable savings over direct computation of the DFT.
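As an illustration of this factoring strategy, here is a minimal sketch of a recursive decimation-in-time radix-2 FFT in Python, assuming the length $N$ is a power of two; the function name radix2_fft is illustrative. Each stage splits the signal into even- and odd-indexed halves, computes two half-length DFTs, and combines their outputs with twiddle factors.

```python
import cmath

def radix2_fft(x):
    """Recursive decimation-in-time radix-2 FFT (len(x) must be a power of two)."""
    N = len(x)
    if N == 1:
        return list(x)                      # a length-1 DFT is the sample itself
    even = radix2_fft(x[0::2])              # DFT of the even-indexed samples
    odd = radix2_fft(x[1::2])               # DFT of the odd-indexed samples
    X = [0j] * N
    for k in range(N // 2):
        # Twiddle factor W_N^k = e^{-j 2*pi*k/N} applied to the odd-half output
        t = cmath.exp(-2j * cmath.pi * k / N) * odd[k]
        X[k] = even[k] + t                  # butterfly: sum ...
        X[k + N // 2] = even[k] - t         # ... and difference
    return X
```

In-place iterative implementations avoid the recursion and list copies, but the key point is the same: the two half-length DFTs are each reused for all $N$ outputs, which is what produces the $O(N \log N)$ operation count.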
The other major class of algorithms is the Prime-Factor Algorithms (PFA). In PFAs, the short-length DFTs must be of relatively prime lengths. These algorithms gain efficiency by reuse of intermediate computations and by eliminating twiddle-factor multiplies, but require more operations than the power-of-two algorithms to compute the short DFTs of various prime lengths. In the end, the computational costs of the prime-factor and the power-of-two algorithms are comparable for similar lengths, as illustrated in Choosing the Best FFT Algorithm. Prime-length DFTs cannot be factored into shorter DFTs, but in different ways both Rader's conversion and the chirp z-transform convert prime-length DFTs into convolutions of other lengths that can be computed efficiently using FFTs via fast convolution.
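As a hedged sketch of the chirp z-transform idea (often attributed to Bluestein), the Python code below rewrites a DFT of arbitrary length $N$, prime or not, as a circular convolution with a chirp sequence and evaluates that convolution with power-of-two FFTs. numpy's FFT is used here only for the embedded transforms, and the name bluestein_dft is illustrative.

```python
import numpy as np

def bluestein_dft(x):
    """Length-N DFT via the chirp z-transform and fast convolution."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    n = np.arange(N)
    chirp = np.exp(-1j * np.pi * n * n / N)     # e^{-j*pi*n^2/N}
    a = x * chirp                               # chirp-premultiplied input
    # Zero-pad to a power of two M >= 2N - 1 so circular convolution equals
    # the desired linear combination.
    M = 1 << (2 * N - 1).bit_length()
    A = np.zeros(M, dtype=complex)
    A[:N] = a
    B = np.zeros(M, dtype=complex)
    B[:N] = np.conj(chirp)                      # e^{+j*pi*n^2/N}
    B[M - N + 1:] = np.conj(chirp[1:][::-1])    # wrap-around for circular conv.
    # Fast convolution: multiply power-of-two FFTs, then invert
    c = np.fft.ifft(np.fft.fft(A) * np.fft.fft(B))
    return chirp * c[:N]                        # chirp post-multiplication

# Example: a prime-length (N = 7) DFT agrees with a reference FFT
if __name__ == "__main__":
    x = np.random.randn(7) + 1j * np.random.randn(7)
    print(np.max(np.abs(bluestein_dft(x) - np.fft.fft(x))))  # ~1e-14
```

Rader's conversion achieves a similar effect for prime lengths by a different route, reindexing the prime-length DFT as a length-$(N-1)$ circular convolution.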
Some applications require only a few DFT frequency samples, in which case Goertzel's algorithm halves the number of computations relative to the DFT sum. Other applications involve successive DFTs of overlapped blocks of samples, for which the running FFT can be more efficient than separate FFTs of each block.
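For the single-bin case, the following is a minimal sketch of Goertzel's algorithm in Python (goertzel is an illustrative name): a second-order recursion is run once over the $N$ input samples, and a single complex operation at the end yields the desired DFT sample $X(k)$. For real-valued input, the loop costs roughly one real multiply and two real additions per sample, which is the source of the savings mentioned above.

```python
import cmath
import math

def goertzel(x, k):
    """Compute the single DFT sample X(k) of x via Goertzel's recursion."""
    N = len(x)
    w = 2.0 * math.pi * k / N
    coeff = 2.0 * math.cos(w)                   # the only multiplier in the loop
    s_prev, s_prev2 = 0.0, 0.0
    for sample in x:                            # one second-order update per sample
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # Combine the final two states to obtain X(k)
    return cmath.exp(1j * w) * s_prev - s_prev2
```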