Figure 1. Ten Cases for the Pseudoinverse.
Here we have the linear system Ax = b, where A is an M x N matrix, x is the N-vector of unknowns, and b is the M-vector of data.
This is a setting for frames and sparse representations.
In cases 1a and 3a, b is necessarily in the span of the columns of A. In addition to these classifications, possible orthogonality of the columns or rows of the matrices gives special characteristics.
Case 1: Here A is a 3 x 3 square matrix, an example of case 1 in Figures 1 and 2.
If the matrix has rank 3, then the vector b is necessarily in the space spanned by the columns of A, which puts the problem in case 1a. It can be solved by inverting A or by some more robust method. If the matrix has rank 1 or 2, then b may or may not lie in the spanned subspace, so the classification will be 1b or 1c, and minimizing the equation error ||Ax - b||, together with the solution norm ||x|| when the minimizer is not unique, yields a unique solution.
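Case 1a can be sketched numerically as follows; the 3 x 3 matrix and data vector here are hypothetical examples chosen so that A has full rank, and direct inversion and the pseudoinverse agree:

```python
import numpy as np

# Hypothetical 3 x 3 example: A has full rank 3, so b is automatically
# in the column span (case 1a) and the system has one exact solution.
A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([3.0, 4.0, 2.0])

assert np.linalg.matrix_rank(A) == 3
x = np.linalg.solve(A, b)          # direct-inversion route
x_pinv = np.linalg.pinv(A) @ b     # pseudoinverse gives the same answer here

assert np.allclose(A @ x, b)       # zero equation error
assert np.allclose(x, x_pinv)      # x = [1, 1, 1] for this example
```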
Case 2: If A is 4 x 3, then we have more equations than unknowns: the overspecified or overdetermined case.
If this matrix has the maximum rank of 3, then we have case 2a or 2b, depending on whether b is in the span of the columns of A or not. In either case a unique solution exists, which can be found by [link] or [link] . For case 2a we have a single exact solution with no equation error, just as in case 1a. For case 2b we have a single optimal approximate solution with the least possible equation error. If the matrix has rank 1 or 2, the classification will be 2c or 2d, and minimizing ||Ax - b|| together with ||x|| yields a unique solution.
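A sketch of case 2b, using a hypothetical 4 x 3 full-rank matrix and an inconsistent data vector; the pseudoinverse and a least-squares solver both return the unique minimizer of the equation error:

```python
import numpy as np

# Hypothetical 4 x 3 full-rank example (case 2b): b is not in the column
# span, so the pseudoinverse returns the unique least-squares solution.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0, 10.0])   # inconsistent: no exact solution

x = np.linalg.pinv(A) @ b
x_lstsq, residual, rank, _ = np.linalg.lstsq(A, b, rcond=None)

assert rank == 3
assert np.allclose(x, x_lstsq)        # both routes minimize ||Ax - b||^2
```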
Case 3: If A is 3 x 4, then we have more unknowns than equations: the underspecified or underdetermined case.
If this matrix has the maximum rank of 3, then we have case 3a, and b must be in the span of the columns of A. For this case many exact solutions exist, all with zero equation error, and the single one with minimum solution norm can be found using [link] or [link] . If the matrix has rank 1 or 2, the classification will be 3b or 3c.
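Case 3a can be sketched with a hypothetical 3 x 4 full-rank matrix; among the infinitely many exact solutions, the pseudoinverse selects the one of minimum norm:

```python
import numpy as np

# Hypothetical 3 x 4 full-rank example (case 3a): infinitely many exact
# solutions; the pseudoinverse picks the one with minimum norm ||x||.
A = np.array([[1.0, 0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 1.0]])
b = np.array([2.0, 2.0, 2.0])

x_min = np.linalg.pinv(A) @ b
assert np.allclose(A @ x_min, b)      # exact: zero equation error

# Any other exact solution has a larger norm, e.g. x = [2, 2, 2, 0]:
x_other = np.array([2.0, 2.0, 2.0, 0.0])
assert np.allclose(A @ x_other, b)
assert np.linalg.norm(x_min) < np.linalg.norm(x_other)
```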
There are several assumptions or side conditions that could be used in order to define a useful unique solution of [link] . The side conditions used to define the Moore-Penrose pseudoinverse are that the norm squared of the equation error Ax - b be minimized and, if there is ambiguity (several solutions with the same minimum error), that the norm squared of x also be minimized. A useful alternative to minimizing the norm of x is to require certain entries of x to be zero (sparsity) or fixed to some nonzero value (equality constraints).
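The two Moore-Penrose side conditions can be seen together on a hypothetical rank-deficient system: all least-squares minimizers share the same (minimum) equation error, and the pseudoinverse returns the one that also minimizes ||x||:

```python
import numpy as np

# Hypothetical rank-1 system: b cannot be matched exactly, and the
# least-squares minimizer is not unique, so both side conditions apply.
A = np.array([[1.0, 1.0],
              [1.0, 1.0],
              [0.0, 0.0]])          # rank 1
b = np.array([2.0, 2.0, 5.0])       # third component is unreachable

x_mp = np.linalg.pinv(A) @ b        # Moore-Penrose solution
# All least-squares minimizers satisfy x1 + x2 = 2; e.g. x = [2, 0]:
x_alt = np.array([2.0, 0.0])

assert np.allclose(A @ x_mp, A @ x_alt)   # same (minimum) equation error
assert np.linalg.norm(x_mp) < np.linalg.norm(x_alt)
```

Here the pseudoinverse returns x = [1, 1], the symmetric minimizer, rather than any of the other points on the line x1 + x2 = 2.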