The coefficients of the best approximation can then be obtained as the vector $c = G^{-1}b$, where $G$ is the Gram matrix with entries $G_{ij} = \langle v_j, v_i \rangle$ and $b$ is the vector with entries $b_i = \langle x, v_i \rangle$, as long as the Gram matrix $G$ is invertible, i.e., it has a nonzero determinant.
In the case that $x$ and $v_1, \ldots, v_n$ are complex-valued vectors, one can rewrite the approximation as $\hat{x} = Vc$, where $c$ is the coefficient vector denoted above and $V$ is a matrix that collects the vectors $v_1, \ldots, v_n$ as its columns. The projection theorem requirement then becomes $\langle x - Vc, v_i \rangle = 0$ for $i = 1, \ldots, n$, which can be rewritten as $v_i^H (x - Vc) = 0$ and collected as before into the matrix equation

$$V^H (x - Vc) = 0 \quad \Longleftrightarrow \quad c = (V^H V)^{-1} V^H x,$$
which is known as the least squares solution and exists as long as $V^H V$ is an invertible matrix. Once these coefficients are obtained, the approximation is equal to $\hat{x} = Vc = V (V^H V)^{-1} V^H x$; therefore, the matrix $P = V (V^H V)^{-1} V^H$ is known as the projection operator.
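As a concrete illustration, here is a minimal sketch assuming NumPy; the arrays `V` and `x` are made up for illustration and stand for the matrix of basis vectors and the vector to be approximated:

```python
import numpy as np

# Example data: x is the vector to approximate, V collects the
# basis vectors v_1, ..., v_n as its columns (values are arbitrary).
rng = np.random.default_rng(0)
V = rng.standard_normal((6, 3)) + 1j * rng.standard_normal((6, 3))
x = rng.standard_normal(6) + 1j * rng.standard_normal(6)

# Least squares coefficients c = (V^H V)^{-1} V^H x.
VH = V.conj().T
c = np.linalg.solve(VH @ V, VH @ x)

# Projection operator P = V (V^H V)^{-1} V^H and the approximation.
P = V @ np.linalg.inv(VH @ V) @ VH
x_hat = V @ c                            # equals P @ x

# The residual x - x_hat is orthogonal to the columns of V.
print(np.allclose(VH @ (x - x_hat), 0))  # True (up to round-off)
```

Forming $V^H V$ explicitly is fine for small, well-conditioned problems; in practice `np.linalg.lstsq`, which solves the same problem via a factorization, is the numerically safer route.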
We consider a linear channel with impulse response $h[n]$ that maps an input $x[n]$ into an output $y[n]$:

$$y[n] = (h * x)[n] = \sum_{k} h[k]\, x[n-k].$$
We wish to design a linear equalizer of impulse response $g[n]$ for the input $x[n]$ so that after it is run through the equalizer and the channel of impulse response $h[n]$, the output approximates the original input (i.e., $\hat{x}[n]$ is as close as possible to $x[n]$):

$$\hat{x}[n] = (h * g * x)[n].$$
Since the equalizer is linear, the order of $g[n]$ and $h[n]$ can be reversed (this will be discussed in more detail later):

$$\hat{x}[n] = (g * h * x)[n] = (g * y)[n].$$
Our design for $g[n]$ will be a finite impulse response filter with tap coefficients $g[0], g[1], \ldots, g[m-1]$; the mapping from input to output at index $n$ is therefore given by

$$\hat{x}[n] = \sum_{k=0}^{m-1} g[k]\, y[n-k].$$
The error in approximating $x[n]$ is given by

$$e[n] = x[n] - \hat{x}[n] = x[n] - \sum_{k=0}^{m-1} g[k]\, y[n-k].$$
The total squared error magnitude over $N$ observations is given by

$$\|e\|_2^2 = \sum_{n=0}^{N-1} \left| x[n] - \sum_{k=0}^{m-1} g[k]\, y[n-k] \right|^2.$$
We want to pose this question in terms of the error of approximation into a subspace:

$$\min_{g} \|x - Yg\|_2,$$

where $x = [x[0], \ldots, x[N-1]]^T$, $g = [g[0], \ldots, g[m-1]]^T$, and $Y$ is a matrix to be determined.
By convention, we assume that $x[n] = 0$ and $y[n] = 0$ for $n < 0$ (i.e., $n = 0$ is the time of the first observation). It can be easily seen that
formulating $\hat{x}$ as the matrix-vector product $Yg$ requires a separate study of the sum in [link] for each value of $n$. For $n = 0$,

$$\hat{x}[0] = \sum_{k=0}^{m-1} g[k]\, y[-k] = g[0]\, y[0],$$
since $y[n] = 0$ for $n < 0$, and so the terms with $k \geq 1$ can be ignored. For $n = 1$,

$$\hat{x}[1] = g[0]\, y[1] + g[1]\, y[0].$$
Similarly, for $n = 2$,

$$\hat{x}[2] = g[0]\, y[2] + g[1]\, y[1] + g[2]\, y[0].$$
Continuing until $n = N-1$,

$$\hat{x}[N-1] = g[0]\, y[N-1] + g[1]\, y[N-2] + \cdots + g[m-1]\, y[N-m].$$
The concatenation of these sums as a vector can then be expressed by the matrix-vector product $\hat{x} = Yg$, where the matrix $Y$ is given by

$$Y = \begin{bmatrix} y[0] & 0 & \cdots & 0 \\ y[1] & y[0] & \cdots & 0 \\ y[2] & y[1] & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ y[N-1] & y[N-2] & \cdots & y[N-m] \end{bmatrix}.$$
Note that for $Y$ to have linearly independent columns (a condition for uniqueness of the solution to the least squares problem) the number of nonzero values of $y[n]$ must be at least $m$. In this case, the solution

$$g = (Y^H Y)^{-1} Y^H x$$

minimizes the error $\|x - Yg\|_2$, as established in the Projection Theorem.
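The construction above translates directly into code. The following is a minimal sketch assuming NumPy; the channel `h`, the signal length `N`, and the tap count `m` are made up for illustration. It builds the matrix $Y$ column by column and solves for the equalizer taps:

```python
import numpy as np

# Made-up example signals: a channel h, an input x, and the
# observed channel output y = h * x (truncated to N samples).
rng = np.random.default_rng(1)
N, m = 100, 8                      # observations and equalizer taps
h = np.array([1.0, 0.5, -0.2])     # hypothetical channel impulse response
x = rng.standard_normal(N)         # transmitted signal
y = np.convolve(h, x)[:N]          # received signal, y[n] = (h * x)[n]

# Build the N-by-m matrix Y: column k holds y delayed by k samples,
# with zeros where n - k < 0 (the convention y[n] = 0 for n < 0).
Y = np.zeros((N, m))
for k in range(m):
    Y[k:, k] = y[:N - k]

# Least squares equalizer taps, g = (Y^H Y)^{-1} Y^H x.
g, *_ = np.linalg.lstsq(Y, x, rcond=None)

x_hat = Y @ g                      # equalized output, close to x
print(np.linalg.norm(x - x_hat))   # residual error norm
```

Using `np.linalg.lstsq` rather than forming $Y^H Y$ explicitly avoids squaring the condition number of $Y$, which matters for long signals.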
In linear regression, we are given a set of input/output pairs $(x_i, y_i)$, $i = 1, \ldots, N$, and we wish to find a linear relationship between inputs and outputs of the form $y_i \approx a x_i + b$ that minimizes the sum of squared errors $\sum_{i=1}^{N} (y_i - a x_i - b)^2$. As in previous examples, we seek to pose this minimization problem in terms of the problem considered by the projection theorem: the error $\|y - Ac\|_2^2$, where $A$ is a matrix, $y$ is a vector, and $c$ is the optimization variable vector. One can easily see that the following choice achieves the desired equality:

$$A = \begin{bmatrix} x_1 & 1 \\ x_2 & 1 \\ \vdots & \vdots \\ x_N & 1 \end{bmatrix}, \quad y = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{bmatrix}, \quad c = \begin{bmatrix} a \\ b \end{bmatrix}.$$
As before, the solution $c$ that minimizes the error is given by

$$c = (A^T A)^{-1} A^T y,$$
which exists and is unique as long as $A^T A$ is invertible, i.e., as long as $A$ has linearly independent columns, i.e., as long as not all the $x_i$ are equal. Now, we see that

$$A^T A = \begin{bmatrix} \sum_{i=1}^{N} x_i^2 & \sum_{i=1}^{N} x_i \\ \sum_{i=1}^{N} x_i & N \end{bmatrix}, \quad A^T y = \begin{bmatrix} \sum_{i=1}^{N} x_i y_i \\ \sum_{i=1}^{N} y_i \end{bmatrix}.$$
Collecting these results, we have that

$$\begin{bmatrix} a \\ b \end{bmatrix} = \frac{1}{N \sum_i x_i^2 - \left(\sum_i x_i\right)^2} \begin{bmatrix} N & -\sum_i x_i \\ -\sum_i x_i & \sum_i x_i^2 \end{bmatrix} \begin{bmatrix} \sum_i x_i y_i \\ \sum_i y_i \end{bmatrix},$$

which yields the familiar formulas

$$a = \frac{N \sum_i x_i y_i - \sum_i x_i \sum_i y_i}{N \sum_i x_i^2 - \left(\sum_i x_i\right)^2}, \qquad b = \frac{\sum_i y_i - a \sum_i x_i}{N}.$$
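A minimal sketch of this computation, assuming NumPy; the data points are made up for illustration:

```python
import numpy as np

# Made-up data points (x_i, y_i), roughly following y = 2x + 1.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])
N = len(x)

# Build A with rows [x_i, 1] and solve (A^T A) c = A^T y.
A = np.column_stack([x, np.ones(N)])
a, b = np.linalg.solve(A.T @ A, A.T @ y)

# The same slope and intercept from the closed-form sums.
a_cf = (N * np.sum(x * y) - np.sum(x) * np.sum(y)) / \
       (N * np.sum(x**2) - np.sum(x)**2)
b_cf = (np.sum(y) - a_cf * np.sum(x)) / N
print(a, b)          # ~1.99, ~1.04 for this data
print(a_cf, b_cf)    # matches the matrix solution
```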
We have studied several examples where an optimization problem can be formulated as

$$\min_{c} \|x - Ac\|_2,$$

where $A$ is a matrix and $x$ and $c$ are column vectors of appropriate sizes.
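In code, all three examples reduce to the same call; a minimal sketch assuming NumPy (the function name is chosen here for illustration):

```python
import numpy as np

def least_squares(A, x):
    """Return the vector c minimizing ||x - A c||_2.

    Equivalent to (A^H A)^{-1} A^H x when A has linearly
    independent columns, but computed more stably via the
    factorization used inside np.linalg.lstsq.
    """
    c, *_ = np.linalg.lstsq(A, x, rcond=None)
    return c
```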