If both $\mathbf{x}$ and $\mathbf{b}$ in [link] are considered to be signals in the same coordinate or basis system, the matrix operator $\mathbf{A}$ is generally square. It may or may not be of full rank and it may or may not have a variety of other properties, but both $\mathbf{x}$ and $\mathbf{b}$ are viewed in the same coordinate system and therefore are the same size.
One of the most ubiquitous of these is convolution, where the output of a linear, shift-invariant system with impulse response $h(n)$ is calculated by [link] if $\mathbf{A}$ is the convolution matrix and $\mathbf{x}$ is the input [link] .
It can also be calculated if $\mathbf{A}$ is the arrangement of the input and $\mathbf{x}$ is the impulse response, since convolution is commutative.
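To make this concrete, here is a minimal sketch in Python (the language, the NumPy/SciPy routines, and the sample values are choices of this note, not part of the original text) that arranges an impulse response into a Toeplitz convolution matrix and checks the matrix-vector product against direct convolution:

```python
import numpy as np
from scipy.linalg import toeplitz

h = np.array([1.0, 2.0, 3.0])        # impulse response (illustrative values)
x = np.array([4.0, 5.0, 6.0, 7.0])   # input signal (illustrative values)

# Build the (N+M-1) x N convolution matrix: each column is a shifted copy of h
col = np.concatenate([h, np.zeros(len(x) - 1)])
row = np.zeros(len(x))
row[0] = h[0]
A = toeplitz(col, row)

print(A @ x)                 # output computed as a matrix times a vector
print(np.convolve(h, x))     # matches direct convolution
```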
If the signal is periodic or if the DFT is being used, then what is called a circulant is used to represent cyclic convolution. An example for $N = 4$ is the Toeplitz system

$$\begin{bmatrix} h_0 & h_3 & h_2 & h_1 \\ h_1 & h_0 & h_3 & h_2 \\ h_2 & h_1 & h_0 & h_3 \\ h_3 & h_2 & h_1 & h_0 \end{bmatrix} \begin{bmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} y_0 \\ y_1 \\ y_2 \\ y_3 \end{bmatrix}.$$
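A quick numerical check of the circulant form (again a sketch; scipy.linalg.circulant and the sample values are assumptions of this note) confirms that the matrix product agrees with DFT-based cyclic convolution:

```python
import numpy as np
from scipy.linalg import circulant

h = np.array([1.0, 2.0, 3.0, 4.0])   # impulse response (illustrative)
x = np.array([5.0, 6.0, 7.0, 8.0])   # one period of the input

y_mat = circulant(h) @ x             # cyclic convolution as a circulant times a vector
y_fft = np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)).real   # same result via the DFT
print(np.allclose(y_mat, y_fft))     # True
```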
One method of understanding and generating matrices of this sort is to construct them as a product of first a decomposition operator, then a modification operator in the new basis system, followed by a recomposition operator. For example, one could first multiply a signal by the DFT operator, which will change it into the frequency domain. One (or more) of the frequency coefficients could be removed (set to zero) and the remainder multiplied by the inverse DFT operator to give a signal back in the time domain but changed by having a frequency component removed. That is a form of signal filtering, and one can talk about removing the energy of a signal at a certain frequency (or many) because of Parseval's theorem.
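The decompose-modify-recompose idea can be sketched in a few lines (the test signal and the bin that is zeroed are invented for illustration):

```python
import numpy as np

n = np.arange(32)
x = np.sin(2 * np.pi * 3 * n / 32) + 0.5 * np.sin(2 * np.pi * 7 * n / 32)

X = np.fft.fft(x)            # decomposition: multiply by the DFT operator
X[7] = 0                     # modification: remove one frequency coefficient...
X[32 - 7] = 0                # ...and its conjugate bin, since x is real
y = np.fft.ifft(X).real      # recomposition: multiply by the inverse DFT operator

print(np.allclose(y, np.sin(2 * np.pi * 3 * n / 32)))  # True: 7-cycle tone removed
print(np.sum(x**2) - np.sum(y**2))   # energy removed, per Parseval's theorem
```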
It would be instructive for the reader to make sense out of the cryptic statement “the DFT diagonalizes the cyclic convolution matrix” to add to the ideas in this note.
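One way to unpack that statement numerically (a sketch; scipy.linalg.dft and circulant are conveniences assumed here) is to apply the DFT matrix as a change of basis and observe that the circulant becomes diagonal, with the DFT of its first column along the diagonal:

```python
import numpy as np
from scipy.linalg import circulant, dft

h = np.array([1.0, 2.0, 3.0, 4.0])   # illustrative impulse response
C = circulant(h)                     # cyclic convolution matrix
F = dft(len(h))                      # (unscaled) DFT matrix

D = F @ C @ np.linalg.inv(F)         # similarity transform into the DFT basis
print(np.round(D, 10))               # a diagonal matrix
print(np.fft.fft(h))                 # its diagonal entries: the DFT of h
```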
For insight, algorithm development, and/or computational efficiency, it is sometimes worthwhile to factor $\mathbf{A}$ into a product of two or more matrices. For example, the matrix [link] illustrated in [link] can be factored into a product of fairly sparse matrices. In fact, the fast Fourier transform (FFT) can be derived by factoring the DFT matrix into $\log N$ factors (if $N = 2^M$), each requiring order $N$ multiplies. This is done in [link] .
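The first step of that factoring can be sketched as follows: one decimation-in-time stage rewrites an $N$-point DFT as two $N/2$-point DFTs plus order-$N$ twiddle-factor multiplies (the function name and the use of np.fft.fft for the half-length transforms are conveniences of this note):

```python
import numpy as np

def dft_one_factor(x):
    """One radix-2 decimation-in-time stage (length assumed even)."""
    N = len(x)
    Xe = np.fft.fft(x[0::2])     # N/2-point DFT of the even-indexed samples
    Xo = np.fft.fft(x[1::2])     # N/2-point DFT of the odd-indexed samples
    w = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # twiddle factors
    return np.concatenate([Xe + w * Xo, Xe - w * Xo])

x = np.random.randn(8)
print(np.allclose(dft_one_factor(x), np.fft.fft(x)))  # True
```

Applying this split recursively $\log_2 N$ times yields the familiar $N \log N$ operation count.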
Using eigenvalue theory [link] , a square matrix with a full set of independent eigenvectors can be factored into a product

$$\mathbf{A} = \mathbf{M}\,\mathbf{\Lambda}\,\mathbf{M}^{-1}$$
where $\mathbf{M}$ is a matrix whose columns are the eigenvectors of $\mathbf{A}$ and $\mathbf{\Lambda}$ is a diagonal matrix with the eigenvalues along the diagonal. The inverse form is a method to “diagonalize” a matrix:

$$\mathbf{\Lambda} = \mathbf{M}^{-1}\,\mathbf{A}\,\mathbf{M}.$$
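A numerical sketch of this factorization (NumPy and the sample matrix are assumptions of this note):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # illustrative matrix with distinct eigenvalues

lam, M = np.linalg.eig(A)           # eigenvalues and eigenvector columns
Lam = np.diag(lam)

print(np.allclose(A, M @ Lam @ np.linalg.inv(M)))   # A = M Lambda M^{-1}
print(np.round(np.linalg.inv(M) @ A @ M, 10))       # Lambda = M^{-1} A M
```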
If a matrix has “repeated eigenvalues,” in other words, two or more of the eigenvalues have the same value but there are fewer than $N$ independent eigenvectors, it is not possible to diagonalize the matrix, but an “almost” diagonal form called the Jordan normal form can be achieved. Those details can be found in most books on matrix theory [link] .
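For a defective matrix, a symbolic sketch (SymPy's jordan_form is used here as a convenience; the matrix is invented for illustration) shows the Jordan block that replaces the diagonal form:

```python
from sympy import Matrix

# Repeated eigenvalue 2 with only one independent eigenvector
A = Matrix([[2, 1],
            [0, 2]])

P, J = A.jordan_form()   # A = P * J * P**(-1)
print(J)                 # Matrix([[2, 1], [0, 2]]): a single 2x2 Jordan block
```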
A more general decomposition is the singular value decomposition (SVD), which is similar to the eigenvalue problem but allows rectangular matrices. It is particularly valuable for expressing the pseudoinverse in a simple form and in making numerical calculations [link] .
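A short sketch (the NumPy calls and the rectangular example are assumptions of this note) of the SVD and the pseudoinverse it yields:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])          # rectangular, full column rank

U, s, Vt = np.linalg.svd(A, full_matrices=False)   # A = U diag(s) V^T
A_pinv = Vt.T @ np.diag(1.0 / s) @ U.T             # pseudoinverse from the SVD
print(np.allclose(A_pinv, np.linalg.pinv(A)))      # True
```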
If our matrix multiplication equation is a vector differential equation (DE) of the form

$$\dot{\mathbf{x}}(t) = \mathbf{A}\,\mathbf{x}(t)$$
or, for difference equations and discrete-time signals or digital signals,

$$\mathbf{x}(n+1) = \mathbf{A}\,\mathbf{x}(n),$$
an inverse or even pseudoinverse will not solve for $\mathbf{x}$. A different approach must be taken [link] and different properties and tools from linear algebra will be used. The solution of this first order vector DE is a coupled set of solutions of first order DEs. If a change of basis is made so that $\mathbf{A}$ is diagonal (or in Jordan form), equation [link] becomes a set of uncoupled (or almost uncoupled in the Jordan form case) first order DEs, and we know the solution of a first order DE is an exponential. This requires consideration of the eigenvalue problem, diagonalization, and the solution of scalar first order DEs [link] .
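The decoupling argument can be sketched for the discrete-time case (the state matrix and initial condition are invented; NumPy is assumed): diagonalizing turns the vector recursion into independent scalar geometric sequences.

```python
import numpy as np

A = np.array([[0.5, 0.2],
              [0.1, 0.4]])          # illustrative state matrix
x0 = np.array([1.0, 0.0])           # initial state

lam, M = np.linalg.eig(A)
z0 = np.linalg.solve(M, x0)         # change of basis: z = M^{-1} x

n = 10
z_n = lam**n * z0                   # uncoupled: each mode evolves as lam_k^n
x_n = (M @ z_n).real                # back to the original coordinates

print(np.allclose(x_n, np.linalg.matrix_power(A, n) @ x0))  # True
```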
State equations are often used to model or describe a system such as a control system, a digital filter, or a numerical algorithm [link] , [link] .