Suppose our inner product space is $\mathbb{R}^N$ or $\mathbb{C}^N$ with the standard inner product (which induces the $\ell_2$ norm).
Re-examining what we have just derived, we can write our approximation as $\hat{x} = \sum_{k=1}^{K} \alpha_k v_k = V\alpha$, where $V$ is an $N \times K$ matrix given by
$$V = \begin{bmatrix} v_1 & v_2 & \cdots & v_K \end{bmatrix}$$
and $\alpha$ is a $K \times 1$ vector given by
$$\alpha = \begin{bmatrix} \alpha_1 & \alpha_2 & \cdots & \alpha_K \end{bmatrix}^T .$$
Given $x$ (and $V$), our search for the closest approximation can be written as
$$\min_{\alpha} \|x - V\alpha\|_2$$
or, equivalently, as
$$\min_{\alpha} \|x - V\alpha\|_2^2 .$$
Using the orthogonality principle, the optimal error $x - V\alpha$ must be orthogonal to each $v_k$; in matrix form, we can replace the $K$ conditions $\langle x - V\alpha, v_k \rangle = 0$ with $V^H (x - V\alpha) = 0$, and hence replace the inner products $\langle x, v_k \rangle$ and $\langle V\alpha, v_k \rangle$ with $V^H x$ and $V^H V \alpha$. Thus, our solution can be written as
$$V^H V \alpha = V^H x ,$$
which yields the formula
$$\alpha = \left( V^H V \right)^{-1} V^H x .$$
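As a quick numerical check of this formula, the short NumPy sketch below (the names `V`, `x`, and `alpha` are illustrative, not part of the original module) builds a small matrix of expansion vectors, solves the normal equations for the coefficients, and compares the result with NumPy's built-in least-squares solver.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 6, 3                      # ambient dimension and number of expansion vectors
V = rng.standard_normal((N, K))  # columns are linearly independent (with probability 1)
x = rng.standard_normal(N)       # vector we wish to approximate

# Solve the normal equations  V^H V alpha = V^H x  for the coefficients.
alpha = np.linalg.solve(V.conj().T @ V, V.conj().T @ x)

# Compare against NumPy's built-in least-squares solver.
alpha_lstsq, *_ = np.linalg.lstsq(V, x, rcond=None)
print(np.allclose(alpha, alpha_lstsq))   # True

# The best approximation of x within the span of the columns of V.
x_hat = V @ alpha
```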
The matrix $V^{\dagger} = \left( V^H V \right)^{-1} V^H$ is known as the "pseudo-inverse" of $V$. Why the name "pseudo-inverse"? Observe that
$$V^{\dagger} V = \left( V^H V \right)^{-1} V^H V = I .$$
Note that $V V^{\dagger} \neq I$ in general. We can verify that $P = V V^{\dagger} = V \left( V^H V \right)^{-1} V^H$ is a projection matrix since
$$P^2 = V \left( V^H V \right)^{-1} V^H V \left( V^H V \right)^{-1} V^H = V \left( V^H V \right)^{-1} V^H = P .$$
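The following minimal sketch (using an arbitrary full-column-rank matrix, an assumption not stated in the original) checks these two claims numerically: $V^{\dagger}$ acts as a left inverse of $V$, while $V V^{\dagger}$ is not the identity but is an idempotent projection onto the column span of $V$.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 6, 3
V = rng.standard_normal((N, K))

V_pinv = np.linalg.inv(V.conj().T @ V) @ V.conj().T   # (V^H V)^{-1} V^H
# For full-column-rank V this matches NumPy's pseudo-inverse.
print(np.allclose(V_pinv, np.linalg.pinv(V)))          # True

print(np.allclose(V_pinv @ V, np.eye(K)))              # True: left inverse
P = V @ V_pinv                                         # N x N projection matrix
print(np.allclose(P, np.eye(N)))                       # False: not a two-sided inverse
print(np.allclose(P @ P, P))                           # True: P is idempotent

# Projecting a vector that already lies in the span of V leaves it unchanged.
z = V @ rng.standard_normal(K)
print(np.allclose(P @ z, z))                           # True
```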
Thus, given a set of $K$ linearly independent vectors $\{v_k\}_{k=1}^{K}$ in $\mathbb{R}^N$ or $\mathbb{C}^N$ ($K \le N$), we can use the pseudo-inverse to project any vector onto the subspace defined by those vectors. This can be useful any time we have a problem of the form:
$$x = V\alpha + e ,$$
where $x$ denotes a set of known "observations," $V$ is a set of known "expansion vectors" (its columns are the $v_k$), $\alpha$ are the unknown coefficients, and $e$ represents an unknown "noise" vector. In this case, the least-squares estimate is given by
$$\hat{\alpha} = \left( V^H V \right)^{-1} V^H x = V^{\dagger} x .$$
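To make the noisy-observation case concrete, here is a small synthetic example (the dimensions and noise level are arbitrary choices, not from the original): we form observations $x = V\alpha + e$ and recover the least-squares estimate of the coefficients with the pseudo-inverse.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 100, 4

V = rng.standard_normal((N, K))        # known expansion vectors (columns)
alpha_true = rng.standard_normal(K)    # unknown coefficients to be estimated
e = 0.05 * rng.standard_normal(N)      # unknown noise
x = V @ alpha_true + e                 # known observations

# Least-squares estimate:  alpha_hat = (V^H V)^{-1} V^H x
alpha_hat = np.linalg.pinv(V) @ x

print(alpha_true)
print(alpha_hat)                       # close to alpha_true when the noise is small
```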