In fact, this complexity is rigorously optimal for Cooley-Tukey FFT algorithms [link], and it immediately points us towards large radices (not radix 2!) to exploit caches effectively in FFTs.
However, there is one shortcoming of any blocked FFT algorithm: it is cache aware, meaning that the implementation depends explicitly on the cache size Z. The implementation must be modified (e.g. by changing the radix) to adapt to different machines as the cache size changes. Worse, as mentioned above, actual machines have multiple levels of cache, and to exploit these one must perform multiple levels of blocking, each parameterized by the corresponding cache size. In the above example, if there were a smaller and faster cache of size z < Z, the size-√Z sub-FFTs should themselves be performed via radix-√z Cooley-Tukey using blocks of size z. And so on. There are two paths out of these difficulties: one is self-optimization, where the implementation automatically adapts itself to the hardware (implicitly including any cache sizes), as described in "Adaptive Composition of FFT Algorithms"; the other is to exploit cache-oblivious algorithms. FFTW employs both of these techniques.
The goal of cache-obliviousness is to structure the algorithm so that it exploits the cache without having the cache size as a parameter: the same code achieves the same asymptotic cache complexity regardless of the cache size Z. An optimal cache-oblivious algorithm achieves the optimal cache complexity (that is, in an asymptotic sense, ignoring constant factors). Remarkably, optimal cache-oblivious algorithms exist for many problems, such as matrix multiplication, sorting, transposition, and FFTs [link]. Not all cache-oblivious algorithms are optimal, of course; for example, the textbook radix-2 algorithm discussed above is "pessimal" cache-oblivious (its cache complexity is independent of Z because it always achieves the worst case!).
For instance, [link] (right) and the algorithm of [link] show a way to obliviously exploit the cache with a radix-2 Cooley-Tukey algorithm, by ordering the computation depth-first rather than breadth-first. That is, the DFT of size n is divided into two DFTs of size n/2, and one DFT of size n/2 is completely finished before doing any computations for the second DFT of size n/2. The two subtransforms are then combined using n/2 radix-2 butterflies, which requires a pass over the array (and hence Θ(n) cache misses if n > Z). This process is repeated recursively until a base case (e.g. size 2) is reached. The cache complexity Q(n;Z) of this algorithm satisfies the recurrence Q(n;Z) = Θ(n) if n ≤ Z, and Q(n;Z) = 2Q(n/2;Z) + Θ(n) otherwise.
The key property is this: once the recursion reaches a size n ≤ Z, the subtransform fits into the cache and no further misses are incurred. The algorithm does not "know" this and continues subdividing the problem, of course, but all of those further subdivisions are in-cache because they are performed in the same depth-first branch of the tree. The solution of this recurrence is Q(n;Z) = Θ(n log(n/Z)) for n > Z.
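This solution can be made plausible by the standard recursion-tree argument, sketched below (constant factors and base-case details suppressed): every level of the recursion above the cache threshold costs Θ(n) misses in its butterfly passes, and there are log₂(n/Z) such levels before the subtransforms fit in cache.

```latex
% Level d of the recursion has 2^d subproblems of size n/2^d, each
% incurring Theta(n/2^d) misses in its butterfly pass, so every
% out-of-cache level costs Theta(n) in total.  After log_2(n/Z)
% halvings the subtransforms fit in cache, and the n/Z in-cache
% subtransforms of size Z cost Theta(n) misses altogether:
Q(n;Z) \;=\; \sum_{d=0}^{\log_2(n/Z)-1} 2^d \cdot \Theta\!\left(\frac{n}{2^d}\right)
       \;+\; \frac{n}{Z}\cdot\Theta(Z)
       \;=\; \Theta\!\left(n \log_2 \frac{n}{Z}\right) + \Theta(n).
```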
This is worse than the theoretical optimum Θ(n log n / log Z) from [link], but it is cache-oblivious (Z never entered the algorithm) and exploits at least some temporal locality. This advantage of a depth-first recursive implementation of the radix-2 FFT was pointed out many years ago by Singleton (where the "cache" was core memory) [link]. On the other hand, when it is combined with FFTW's self-optimization and larger radices in "Adaptive Composition of FFT Algorithms", this algorithm actually performs very well until n becomes extremely large. By itself, however, the algorithm of [link] must be modified to attain adequate performance for reasons that have nothing to do with the cache. These practical issues are discussed further in "Cache-obliviousness in practice".