There exists a different recursive FFT that is optimal cache-oblivious, however, and that is the radix-$\sqrt{n}$ "four-step" Cooley-Tukey algorithm (again executed recursively, depth-first) [link]. The cache complexity $Q(n;Z)$ of this algorithm (where $Z$ is the ideal cache size) satisfies the recurrence:

$$Q(n;Z) = \begin{cases} \Theta(n) & n \le Z \\ 2\sqrt{n}\, Q(\sqrt{n};Z) + \Theta(n) & \text{otherwise.} \end{cases}$$
That is, at each stage one performs $\sqrt{n}$ DFTs of size $\sqrt{n}$ (recursively), then multiplies by the $\Theta(n)$ twiddle factors (and does a matrix transposition to obtain in-order output), then finally performs another $\sqrt{n}$ DFTs of size $\sqrt{n}$. The solution of this recurrence is $Q(n;Z) = \Theta(n \log_Z n) = \Theta(n \log n / \log Z)$, the same as the optimal cache complexity [link]! (Intuitively, each level of the recursion halves $\log n$, so only about $\log_2 \log_Z n$ levels are needed before the subtransforms fit in cache, and the total work doubles from one level to the next, summing to $\Theta(n \log_Z n)$.)
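To make the recursion concrete, the following minimal C sketch executes the four steps depth-first. This is an illustration, not FFTW's actual code: the names `fft4step` and `dft_direct` and the cutoff `BASE` are hypothetical, the sketch assumes $n = 2^{2^k}$ so that $\sqrt{n}$ is an integer at every level, and the final transposition is written naively for brevity, whereas a fully cache-oblivious implementation would also perform the transpose by recursive blocking.

```c
#include <complex.h>
#include <math.h>
#include <stdlib.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define BASE 4 /* hypothetical cutoff; a real code would use a larger, tuned base case */

/* O(n^2) direct DFT standing in for a hard-coded base-case kernel. */
static void dft_direct(double complex *x, int n)
{
    double complex y[BASE];
    for (int k = 0; k < n; ++k) {
        y[k] = 0;
        for (int j = 0; j < n; ++j)
            y[k] += x[j] * cexp(-2 * M_PI * I * (double)(j * k) / n);
    }
    for (int k = 0; k < n; ++k)
        x[k] = y[k];
}

/* Depth-first radix-sqrt(n) "four-step" FFT, in place; x is viewed as an
   r-by-r row-major matrix with r = sqrt(n). */
static void fft4step(double complex *x, int n)
{
    if (n <= BASE) {
        dft_direct(x, n);
        return;
    }
    int r = (int)(sqrt((double)n) + 0.5);
    double complex *col = malloc(r * sizeof *col);

    /* Step 1: r DFTs of size r down the columns, gathered into contiguous
       scratch so that each recursive call operates on unit-stride data. */
    for (int j2 = 0; j2 < r; ++j2) {
        for (int j1 = 0; j1 < r; ++j1)
            col[j1] = x[j1 * r + j2];
        fft4step(col, r);
        for (int k1 = 0; k1 < r; ++k1)
            x[k1 * r + j2] = col[k1];
    }

    /* Step 2: multiply by the Theta(n) twiddle factors w_n^(k1*j2). */
    for (int k1 = 0; k1 < r; ++k1)
        for (int j2 = 0; j2 < r; ++j2)
            x[k1 * r + j2] *= cexp(-2 * M_PI * I * (double)(k1 * j2) / n);

    /* Step 3: another r DFTs of size r, now along the contiguous rows. */
    for (int k1 = 0; k1 < r; ++k1)
        fft4step(x + k1 * r, r);

    /* Step 4: transpose so that output element k1 + r*k2 lands in order.
       (Naive transpose for brevity; a cache-oblivious code would use
       recursive blocking here as well.) */
    for (int k1 = 0; k1 < r; ++k1)
        for (int k2 = k1 + 1; k2 < r; ++k2) {
            double complex t = x[k1 * r + k2];
            x[k1 * r + k2] = x[k2 * r + k1];
            x[k2 * r + k1] = t;
        }

    free(col);
}
```

Note that each recursive subtransform operates on contiguous data (the columns are gathered into contiguous scratch), so once a subproblem fits in cache, the entire subtree of recursion beneath it runs without further misses.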
These algorithms illustrate the basic features of most optimal cache-oblivious algorithms: they employ a recursive divide-and-conquer strategy to subdivide the problem until it fits into cache, at which point the subdivision continues but no further cache misses are required. Moreover, a cache-oblivious algorithm exploits all levels of the cache in the same way, so an optimal cache-oblivious algorithm exploits a multi-level cache optimally as well as a two-level cache [link]: the multi-level "blocking" is implicit in the recursion.
Even though the radix-$\sqrt{n}$ algorithm is optimal cache-oblivious, it does not follow that FFT implementation is a solved problem. The optimality is only in an asymptotic sense, ignoring constant factors, lower-order terms, etcetera, all of which can matter a great deal in practice. For small or moderate $n$, quite different algorithms may be superior, as discussed in "Memory strategies in FFTW". Moreover, real caches are inferior to an ideal cache in several ways. The unsurprising consequence of all this is that cache-obliviousness, like any complexity-based property of an algorithm, does not absolve one from the ordinary process of software optimization. At best, it reduces the amount of memory/cache tuning that one needs to perform, structuring the implementation to make further optimization easier and more portable.
Perhaps most importantly, one needs to perform an optimization that has almost nothing to do with the caches: the recursion must be "coarsened" to amortize the function-call overhead and to enable compiler optimization. For example, the simple pedagogical code of the algorithm in [link] recurses all the way down to $n = 1$, and hence there are roughly $2n$ function calls in total, so that every data point incurs a two-function-call overhead on average. Moreover, the compiler cannot fully exploit the large register sets and instruction-level parallelism of modern processors with an $n = 1$ function body. In principle, it might be possible for a compiler to automatically coarsen the recursion, similar to how compilers can partially unroll loops, but we are currently unaware of any general-purpose compiler that performs this optimization. These problems can be effectively erased, however, simply by making the base cases larger: e.g., the recursion could stop when $n = 32$ is reached, at which point a highly optimized hard-coded FFT of that size would be executed, as in the sketch below. In FFTW, we produced this sort of large base case using a specialized code-generation program described in "Generating Small FFT Kernels".
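For illustration (again, a sketch rather than FFTW's actual code), here is a depth-first radix-2 recursion whose base case has been coarsened to $n \le 32$. The hypothetical `dft_kernel` stands in for a hard-coded, fully unrolled kernel of the kind produced by the generator described in "Generating Small FFT Kernels"; a plain $O(n^2)$ DFT is used here only to keep the example self-contained.

```c
#include <complex.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Stand-in for a hard-coded, fully unrolled size-<=32 kernel; a machine-
   generated codelet would replace this O(n^2) loop with straight-line code. */
static void dft_kernel(const double complex *in, double complex *out,
                       int stride, int n)
{
    for (int k = 0; k < n; ++k) {
        double complex s = 0;
        for (int j = 0; j < n; ++j)
            s += in[j * stride] * cexp(-2 * M_PI * I * (double)(j * k) / n);
        out[k] = s;
    }
}

/* Depth-first radix-2 decimation-in-time FFT with a coarsened base case:
   instead of recursing down to n = 1, stop at n <= 32 and hand the whole
   subproblem to one kernel call.  The recursion above the cutoff (and
   hence the cache analysis) is unchanged. */
static void fft_coarse(const double complex *in, double complex *out,
                       int stride, int n)
{
    if (n <= 32) {
        dft_kernel(in, out, stride, n);
        return;
    }
    /* DFTs of the even- and odd-indexed subsequences. */
    fft_coarse(in, out, 2 * stride, n / 2);
    fft_coarse(in + stride, out + n / 2, 2 * stride, n / 2);

    /* Combine the two half-size DFTs with twiddle factors. */
    for (int k = 0; k < n / 2; ++k) {
        double complex e = out[k];
        double complex o = out[k + n / 2]
                         * cexp(-2 * M_PI * I * (double)k / n);
        out[k]         = e + o;
        out[k + n / 2] = e - o;
    }
}
```

Calling `fft_coarse(x, y, 1, n)` computes the size-$n$ DFT of `x` into `y`. With the cutoff at 32, a power-of-two transform makes roughly $n/16$ function calls instead of roughly $2n$, and each kernel invocation presents the compiler with a large straight-line block of code to schedule.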