Decomposition means that the polynomial $x^n - 1$ is written as the composition of two polynomials: here, $x^m$ is inserted into $x^k - 1$, i.e., $x^n - 1 = (x^m)^k - 1$. Note that this is a special property: most polynomials do not decompose.
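For example, for $n = 6$ with $k = 3$ and $m = 2$:

$$x^6 - 1 = (x^2)^3 - 1,$$

i.e., $x^2$ is inserted into $x^3 - 1$. By contrast, a generic polynomial of degree 6 cannot be written as a composition $p(q(x))$ with $\deg p, \deg q > 1$.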
Based on this polynomial decomposition, we obtain the following stepwise decomposition of $\mathbb{C}[x]/(x^n - 1)$, which is more general than the previous one in [link] – [link]. The basic idea is to first decompose with respect to the outer polynomial $t^k - 1$, $t = x^m$, and then completely [link]:

$$\mathbb{C}[x]/(x^n - 1) \;=\; \mathbb{C}[x]/\bigl((x^m)^k - 1\bigr)$$
$$\to\; \bigoplus_{0 \le i < k} \mathbb{C}[x]/(x^m - \omega_k^i)$$
$$\to\; \bigoplus_{0 \le i < k} \bigoplus_{0 \le j < m} \mathbb{C}[x]/(x - \omega_n^{jk+i}).$$
As bases in the smaller algebras $\mathbb{C}[x]/(x^m - \omega_k^i)$ we choose $c_i = (1, x, \dots, x^{m-1})$. As before, the derivation is completely mechanical from here: only the three matrices corresponding to [link] – [link] have to be read off.
The first decomposition step requires us to compute $x^\ell \bmod (x^m - \omega_k^i)$, $0 \le \ell < n$. To do so, we decompose the index $\ell$ as $\ell = jm + r$, $0 \le j < k$, $0 \le r < m$, and compute

$$x^\ell = x^{jm + r} = (x^m)^j x^r \equiv \omega_k^{ij} x^r \bmod (x^m - \omega_k^i).$$
This shows that the matrix for [link] is given by $DFT_k \otimes I_m$.
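Continuing the example $n = 6$, $m = 2$, $k = 3$: for $\ell = 5 = 2 \cdot 2 + 1$ we get

$$x^5 = (x^2)^2\, x \equiv \omega_3^{2i}\, x \bmod (x^2 - \omega_3^i),$$

which is exactly the entry pattern of $DFT_3 \otimes I_2$.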
In step [link], each $\mathbb{C}[x]/(x^m - \omega_k^i)$ is completely decomposed by its polynomial transform

$$DFT_m \cdot \operatorname{diag}_{0 \le j < m}\bigl(\omega_n^{ij}\bigr).$$
At this point, $\mathbb{C}[x]/(x^n - 1)$ is completely decomposed, but the spectrum is ordered according to $jk + i$, $0 \le i < k$, $0 \le j < m$ ($j$ runs faster). The desired order is $im + j$.
Thus, in step [link], we need to apply the permutation $jk + i \mapsto im + j$, which is exactly the stride permutation $L^n_m$ in [link].
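For $n = 6$, $m = 2$, $k = 3$, the computed spectrum appears in the frequency order $jk + i = 0, 3, 1, 4, 2, 5$ (blocks $i = 0, 1, 2$, each with $j = 0, 1$), and the stride permutation restores the natural order $0, 1, \dots, 5$.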
In summary, we obtain the Cooley-Tukey decimation-in-frequency FFT with arbitrary radix:

$$DFT_n \;=\; L^n_m \,(I_k \otimes DFT_m)\, T^n_m \,(DFT_k \otimes I_m).$$
The matrix $T^n_m = \bigoplus_{0 \le i < k} \operatorname{diag}_{0 \le j < m}(\omega_n^{ij})$ is diagonal and usually called the twiddle matrix. Transposition using [link] yields the corresponding decimation-in-time version:

$$DFT_n \;=\; (DFT_k \otimes I_m)\, T^n_m\, (I_k \otimes DFT_m)\, L^n_k.$$
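Both factorizations are easy to check numerically. The following sketch is our illustration, not part of the chapter: the helper dft and the explicit construction of the stride permutation are ours, and $\omega_n = e^{-2\pi i/n}$ is assumed as above.

import numpy as np

def dft(n):
    # DFT matrix [w_n^(uv)] with w_n = exp(-2*pi*i/n)
    u = np.arange(n)
    return np.exp(-2j * np.pi * np.outer(u, u) / n)

k, m = 3, 4                # one example radix split; n = km
n = k * m
w = np.exp(-2j * np.pi / n)

# Twiddle matrix T^n_m: diagonal entry w_n^(i*j) at position i*m + j
T = np.diag([w ** (i * j) for i in range(k) for j in range(m)])

# Stride permutation L^n_m: moves block-i/slot-j output (position i*m + j)
# to its frequency position j*k + i
L = np.zeros((n, n))
for i in range(k):
    for j in range(m):
        L[j * k + i, i * m + j] = 1

# Decimation in frequency: DFT_n = L^n_m (I_k x DFT_m) T^n_m (DFT_k x I_m)
dif = L @ np.kron(np.eye(k), dft(m)) @ T @ np.kron(dft(k), np.eye(m))
assert np.allclose(dif, dft(n))

# Decimation in time (transpose): DFT_n = (DFT_k x I_m) T^n_m (I_k x DFT_m) L^n_k,
# using (L^n_m)^T = L^n_k
dit = np.kron(dft(k), np.eye(m)) @ T @ np.kron(np.eye(k), dft(m)) @ L.T
assert np.allclose(dit, dft(n))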
This chapter only scratches the surface of the connection between algebra and the DFT or signal processing in general. We provide a few references for further reading.
As mentioned before, the use of polynomial algebras and the CRT underlies much of the early work on FFTs and convolution algorithms [link], [link], [link]. For example, Winograd's work on FFTs minimizes the number of non-rational multiplications. This and his work on complexity theory in general make heavy use of polynomial algebras [link], [link], [link] (see Chapter Winograd’s Short DFT Algorithms for more information and references). See [link] for a broad treatment of algebraic complexity theory.
Since $\mathbb{C}[x]/(x^n - 1)$ can be viewed as the group algebra of the cyclic group, the methods shown in this chapter can be translated into the context of group representation theory. For example, [link] derives the general-radix FFT using group theory and already uses the Kronecker product formalism. So does Beth, who started the area of FFTs for more general groups [link], [link]. However, Fourier transforms for groups have found only sporadic applications [link]. Along a related line of work, [link] shows that, using group theory, it is possible to discover and generate certain algorithms for trigonometric transforms, such as discrete cosine transforms (DCTs), automatically using a computer program.
More recently, the polynomial algebra framework was extended to include most trigonometric transforms used in signal processing [link], [link], namely, besides the DFT, the discrete cosine and sine transforms and various real DFTs including the discrete Hartley transform. It turns out that the same techniques shown in this chapter can then be applied to derive, explain, and classify most of the known algorithms for these transforms and even obtain a large class of new algorithms including general-radix algorithms for the discrete cosine and sine transforms (DCTs/DSTs) [link], [link], [link], [link].
This latter line of work is part of the algebraic signal processing theory briefly discussed next.
The algebraic properties of transforms used in the above work on algorithm derivation hint at a connection between algebra and (linear) signal processing itself. This is indeed the case and was fully developed in a recent body of work called algebraic signal processing theory (ASP). The foundation of ASP is developed in [link], [link], [link].
ASP first identifies the algebraic structure of (linear) signal processing: the common assumptions on available operations for filters and signals make the set of filters an algebra $\mathcal{A}$ and the set of signals an associated $\mathcal{A}$-module $\mathcal{M}$. ASP then builds a signal processing theory formally from the axiomatic definition of a signal model: a triple $(\mathcal{A}, \mathcal{M}, \Phi)$, where $\Phi$ generalizes the idea of the $z$-transform to mappings from vector spaces of signal values to $\mathcal{M}$. If a signal model is given, other concepts, such as spectrum, Fourier transform, and frequency response, are automatically defined but take different forms for different models. For example, infinite and finite time as discussed in [link] are two examples of signal models. Their complete definition is provided in [link] and identifies the proper notion of a finite $z$-transform as a mapping $\mathbb{C}^n \to \mathbb{C}[x]/(x^n - 1)$.
Signal model | Infinite time | Finite time |
$\mathcal{A}$ | $\{\sum_{n \in \mathbb{Z}} h(n)x^n \mid (h(n)) \in \ell^1(\mathbb{Z})\}$ | $\mathbb{C}[x]/(x^n - 1)$ |
$\mathcal{M}$ | $\{\sum_{n \in \mathbb{Z}} s(n)x^n \mid (s(n)) \in \ell^2(\mathbb{Z})\}$ | $\mathbb{C}[x]/(x^n - 1)$ |
$\Phi$ | defined in [link] | defined in [link] |
ASP shows that many signal models are in principle possible, each with its own notion of filtering and Fourier transform. Those that support shift-invariance have commutative algebras. Since finite-dimensional commutative algebras are precisely polynomial algebras, their appearance in signal processing is explained. For example, ASP identifies the polynomial algebras underlying the DCTs and DSTs, which hence become Fourier transforms in the ASP sense. The associated signal models are called finite space models since they support signal processing based on an undirected shift operator, different from the directed time shift. Many more insights are provided by ASP, including the need for and choices of boundary conditions, properties of transforms, techniques for deriving new signal models, and the concise derivation of algorithms mentioned before.