Recall that a set $B = \{e_1, e_2, \ldots, e_n\}$ is a basis for a subspace $X$ if $\mathrm{span}(B) = X$ and $B$ has linearly independent elements. If $B$ is a basis for $X$, then each $x \in X$ is uniquely determined by coefficients $a_1, \ldots, a_n$ such that $x = \sum_{i=1}^n a_i e_i$. In this sense, we could operate either with $x$ itself or with the coefficient vector $a = [a_1, a_2, \ldots, a_n]^T$. One would wonder, then, whether particular operations can be performed with the representation $a$ instead of the original vector $x$.
Example 1 Assume $x, y \in X$ have representations $a = [a_1, \ldots, a_n]^T$ and $b = [b_1, \ldots, b_n]^T$ in a basis $\{e_1, \ldots, e_n\}$ for $X$. Can we say that $\langle x, y \rangle_X = \langle a, b \rangle_{\mathbb{R}^n}$?
For the particular example of $X = P_2([0,1])$, the set of all quadratic functions supported on $[0,1]$ with the inner product $\langle f, g \rangle = \int_0^1 f(t)\, g(t)\, dt$, pick for instance $x(t) = t$ and $y(t) = t^2$. One can see then that if we label $e_1 = 1$, $e_2 = t$, $e_3 = t^2$, then the coefficient vectors for $x$ and $y$ are $a = [0, 1, 0]^T$ and $b = [0, 0, 1]^T$, respectively. Let us compute both inner products:
$$\langle x, y \rangle_X = \int_0^1 t \cdot t^2 \, dt = \frac{1}{4}, \qquad \langle a, b \rangle_{\mathbb{R}^3} = 0 \cdot 0 + 1 \cdot 0 + 0 \cdot 1 = 0.$$
Since $\tfrac{1}{4} \neq 0$, we find that we fail to obtain the desired equivalence between vectors and their representations.
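As a quick numerical sketch of this example (it assumes the monomial basis above, the $L_2([0,1])$ inner product, and the illustrative choices $x(t) = t$, $y(t) = t^2$; these are not prescribed by the text), the following Python snippet compares the two inner products:

```python
import numpy as np

def l2_inner_01(p, q):
    """L2([0,1]) inner product of two polynomials given by coefficient
    vectors p, q in the monomial basis {1, t, t^2, ...}:
    <p, q> = sum_{i,j} p_i q_j * integral_0^1 t^(i+j) dt
           = sum_{i,j} p_i q_j / (i + j + 1)."""
    total = 0.0
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            total += pi * qj / (i + j + 1)
    return total

# x(t) = t  ->  coefficients (0, 1, 0);  y(t) = t^2  ->  coefficients (0, 0, 1)
a = np.array([0.0, 1.0, 0.0])
b = np.array([0.0, 0.0, 1.0])

print(l2_inner_01(a, b))   # <x, y> = integral_0^1 t^3 dt = 0.25
print(np.dot(a, b))        # <a, b> = 0.0 -- the two inner products differ
```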
While this example was unsuccessful, simple conditions on the basis will yield this desired equivalence, plus many more useful properties.
Several definitions of orthogonality will be useful to us during the course.
Definition 1 A pair of vectors $x$ and $y$ in an inner product space $X$ are orthogonal (denoted $x \perp y$) if the inner product $\langle x, y \rangle = 0$.
Note that the zero vector is immediately orthogonal to all vectors.
Definition 2 Let $X$ be an inner product space. A set of vectors $\{x_i\}_i \subseteq X$ is orthogonal if $\langle x_i, x_j \rangle = 0$ for all $i \neq j$.
Definition 3 Let $X$ be an inner product space. A set of vectors $\{x_i\}_i \subseteq X$ is orthonormal if it is an orthogonal set and $\|x_i\| = 1$ for all $i$.
Definition 4 A vector $x$ in an inner product space $X$ is orthogonal to a set $S \subseteq X$ (denoted $x \perp S$) if $\langle x, s \rangle = 0$ for all $s \in S$.
Definition 5 Let $X$ be an inner product space. Two sets $S_1 \subseteq X$ and $S_2 \subseteq X$ are orthogonal (denoted $S_1 \perp S_2$) if $\langle s_1, s_2 \rangle = 0$ for all $s_1 \in S_1$ and $s_2 \in S_2$.
Definition 6 The orthogonal complement of a set $S \subseteq X$, denoted $S^\perp$, is the set of all vectors $x \in X$ that are orthogonal to $S$.
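The following short Python sketch illustrates these definitions numerically, with arbitrarily chosen vectors in $\mathbb{R}^3$ under the standard inner product; using `scipy.linalg.null_space` to obtain the orthogonal complement of a span is an implementation choice, not part of the definitions:

```python
import numpy as np
from scipy.linalg import null_space

# Two vectors in R^3 under the standard inner product <x, y> = x^T y.
x = np.array([1.0, 1.0, 0.0])
y = np.array([1.0, -1.0, 0.0])
print(np.dot(x, y))            # 0.0, so x and y are orthogonal (Definition 1)

# Normalizing them gives an orthonormal set (Definitions 2 and 3).
e1, e2 = x / np.linalg.norm(x), y / np.linalg.norm(y)
print(np.dot(e1, e2), np.linalg.norm(e1), np.linalg.norm(e2))   # 0.0 1.0 1.0

# Orthogonal complement of S = span{x, y} (Definition 6): all vectors z with
# <z, s> = 0 for every s in S, i.e., the null space of the matrix with rows x, y.
S = np.vstack([x, y])
print(null_space(S))           # one column, proportional to [0, 0, 1]^T
```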
Why is orthonormality good? For many reasons. One of them is the equivalence of inner products that we desired in the previous example. Another is that having an orthonormal basis allows us to easily find the coefficients of a vector $x$ in a basis $\{e_1, \ldots, e_n\}$.
Example 2 Let $x \in X$ and let $\{e_1, e_2, \ldots, e_n\}$ be a basis for $X$ (i.e., $X = \mathrm{span}(\{e_1, \ldots, e_n\})$). We wish to find coefficients $a_1, \ldots, a_n$ such that $x = \sum_{i=1}^n a_i e_i$. Consider the inner products
$$\langle x, e_j \rangle = \left\langle \sum_{i=1}^n a_i e_i, e_j \right\rangle = \sum_{i=1}^n a_i \langle e_i, e_j \rangle,$$
due to the linearity of the inner product in the first argument. If $\{e_1, \ldots, e_n\}$ is orthonormal, then we have that $\langle e_i, e_j \rangle = 0$ for $i \neq j$ and $\langle e_j, e_j \rangle = 1$. In that case the sum above becomes
$$\langle x, e_j \rangle = \sum_{i=1}^n a_i \langle e_i, e_j \rangle = a_j \langle e_j, e_j \rangle = a_j,$$
due to the orthonormality of $\{e_1, \ldots, e_n\}$. In other words, for an orthonormal basis one can find the basis coefficients as $a_j = \langle x, e_j \rangle$.
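A minimal numerical sketch of this fact, assuming an orthonormal basis for $\mathbb{R}^4$ built from a QR factorization (an arbitrary choice made here for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# An orthonormal basis for R^4: the columns of Q from a QR factorization
# of a random matrix.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))

a_true = np.array([2.0, -1.0, 0.5, 3.0])      # coefficients in that basis
x = Q @ a_true                                # x = sum_j a_j e_j

# For an orthonormal basis, a_j = <x, e_j>, i.e., a = Q^T x.
a = Q.T @ x
print(np.allclose(a, a_true))                 # True
```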
If $\{e_1, \ldots, e_n\}$ is not orthonormal, then we can rewrite the sum above as the product of a row vector and a column vector as follows:
$$\langle x, e_j \rangle = \sum_{i=1}^n a_i \langle e_i, e_j \rangle = \begin{bmatrix} \langle e_1, e_j \rangle & \langle e_2, e_j \rangle & \cdots & \langle e_n, e_j \rangle \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix}.$$
We can then stack these equations for $j = 1, 2, \ldots, n$ to obtain the following matrix-vector multiplication:
$$\underbrace{\begin{bmatrix} \langle x, e_1 \rangle \\ \langle x, e_2 \rangle \\ \vdots \\ \langle x, e_n \rangle \end{bmatrix}}_{\beta} = \underbrace{\begin{bmatrix} \langle e_1, e_1 \rangle & \langle e_2, e_1 \rangle & \cdots & \langle e_n, e_1 \rangle \\ \langle e_1, e_2 \rangle & \langle e_2, e_2 \rangle & \cdots & \langle e_n, e_2 \rangle \\ \vdots & \vdots & \ddots & \vdots \\ \langle e_1, e_n \rangle & \langle e_2, e_n \rangle & \cdots & \langle e_n, e_n \rangle \end{bmatrix}}_{G} \underbrace{\begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix}}_{a}$$
The nomenclature given above provides us with the matrix equation $\beta = G a$, where $\beta$ and $G$ have entries $\beta_j = \langle x, e_j \rangle$ and $G_{j,i} = \langle e_i, e_j \rangle$, respectively.
Definition 7 The matrix $G$ above is called the Gram matrix (or Gramian) of the set $\{e_1, \ldots, e_n\}$.
In the particular case of an orthonormal set $\{e_1, \ldots, e_n\}$, it is easy to see that $G = I$, the identity matrix, and so $a = \beta$, as given earlier. For invertible Gramians $G$, one could compute the coefficients in vector form as $a = G^{-1} \beta$. For square matrices (like $G$), invertibility is linked to singularity.
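A small Python sketch of this computation, assuming an arbitrary non-orthonormal basis for $\mathbb{R}^3$ with the standard inner product (solving $Ga = \beta$ rather than forming $G^{-1}$ explicitly is a numerical preference, not a requirement):

```python
import numpy as np

# A non-orthonormal basis for R^3: the columns of B (chosen arbitrarily).
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
a_true = np.array([1.0, -2.0, 3.0])
x = B @ a_true                       # x = sum_i a_i e_i

# Gram matrix G[j, i] = <e_i, e_j> and right-hand side beta[j] = <x, e_j>.
G = B.T @ B
beta = B.T @ x

# Solve G a = beta for the coefficients.
a = np.linalg.solve(G, beta)
print(np.allclose(a, a_true))        # True
```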
Definition 8 A singular matrix is a non-invertible square matrix. A non-singular matrix is an invertible square matrix.
Theorem 1 A matrix $A$ is singular if $Ax = 0$ for some $x \neq 0$. A matrix $A$ is non-singular if $Ax = 0$ only for $x = 0$.
The link between this notion of singularity and invertibility is straightforward: if $A$ is singular, then there is some $x \neq 0$ for which $Ax = 0$. Consider the mapping $y = Ax$; we would also have $A0 = 0$. Since $x \neq 0$, one cannot “invert” the mapping provided by $A$: both $x$ and $0$ are sent to the same output $y = 0$, so the input cannot be recovered from it.
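A brief numerical illustration, using an arbitrarily chosen singular $2 \times 2$ matrix:

```python
import numpy as np

# A singular 2x2 matrix: its second column is twice the first.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
x = np.array([2.0, -1.0])     # a nonzero vector with A x = 0

print(A @ x)                  # [0. 0.] -- same output as A @ [0, 0]
# Both x and the zero vector map to 0, so the mapping y = A x cannot be
# inverted; NumPy reports the matrix as singular:
try:
    np.linalg.inv(A)
except np.linalg.LinAlgError as err:
    print(err)                # "Singular matrix"
```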
Theorem 2 The set $\{e_1, \ldots, e_n\}$ is linearly independent if and only if its Gram matrix $G$ is non-singular (i.e., if and only if $Ga = 0$ only for $a = 0$).
Proof: We will prove an equivalent statement: $\{e_1, \ldots, e_n\}$ is linearly dependent if and only if $G$ is singular, i.e., if and only if there exists a nonzero vector $a$ such that $Ga = 0$.
We first prove that if $\{e_1, \ldots, e_n\}$ is linearly dependent then $G$ is singular. In this case there exists a set of scalars $a_1, \ldots, a_n$, with at least one nonzero, such that $\sum_{i=1}^n a_i e_i = 0$. We then can write $\left\langle \sum_{i=1}^n a_i e_i, e_j \right\rangle = 0$ for each $j = 1, \ldots, n$. Linearity allows us to take the sum and the scalars outside the inner product:
$$\sum_{i=1}^n a_i \langle e_i, e_j \rangle = 0.$$
We can rewrite this equation in terms of the entries of the Gram matrix as $\sum_{i=1}^n a_i G_{j,i} = 0$. This sum, in turn, can be written as the vector inner product
$$\begin{bmatrix} G_{j,1} & G_{j,2} & \cdots & G_{j,n} \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix} = 0,$$
which is true for every value of $j = 1, \ldots, n$. We can therefore collect these equations into a matrix-vector product:
$$Ga = 0.$$
Therefore we have found a nonzero vector $a$ for which $Ga = 0$, and therefore $G$ is singular. Since all steps here are equivalences, we can backtrack through them to prove the opposite direction of the theorem.
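A short sketch of this criterion, assuming vectors in $\mathbb{R}^3$ with the standard inner product and using the determinant of the Gramian as a (numerically naive) singularity test:

```python
import numpy as np

def gram(vectors):
    """Gram matrix G[j, i] = <e_i, e_j> for a list of vectors in R^n
    under the standard inner product."""
    E = np.array(vectors, dtype=float)   # rows are the vectors e_1, ..., e_n
    return E @ E.T

# Linearly independent set: non-singular Gram matrix (nonzero determinant).
indep = [[1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(np.linalg.det(gram(indep)))    # nonzero

# Linearly dependent set (third vector = sum of the first two): singular Gramian.
dep = [[1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [2.0, 1.0, 0.0]]
print(np.linalg.det(gram(dep)))      # 0.0 (up to round-off)
```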
There are still more nice properties of orthogonal sets of vectors. The next one has well-known geometric applications.
Theorem 3 (Pythagorean theorem) If $x$ and $y$ are orthogonal ($x \perp y$), then $\|x + y\|^2 = \|x\|^2 + \|y\|^2$.
Proof: Expanding the squared norm in terms of the inner product,
$$\|x + y\|^2 = \langle x + y, x + y \rangle = \langle x, x \rangle + \langle x, y \rangle + \langle y, x \rangle + \langle y, y \rangle.$$
Because $x$ and $y$ are orthogonal, $\langle x, y \rangle = \langle y, x \rangle = 0$ and we are left with $\langle x, x \rangle = \|x\|^2$ and $\langle y, y \rangle = \|y\|^2$. Thus:
$$\|x + y\|^2 = \|x\|^2 + \|y\|^2.$$
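A quick numerical check of the theorem, with two arbitrarily chosen orthogonal vectors in $\mathbb{R}^3$:

```python
import numpy as np

# Two orthogonal vectors in R^3 (chosen for illustration).
x = np.array([3.0, 0.0, 4.0])
y = np.array([0.0, 5.0, 0.0])
assert np.isclose(np.dot(x, y), 0.0)          # x and y are orthogonal

lhs = np.linalg.norm(x + y) ** 2
rhs = np.linalg.norm(x) ** 2 + np.linalg.norm(y) ** 2
print(lhs, rhs, np.isclose(lhs, rhs))         # 50.0 50.0 True
```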