Definition 1 A scalar $\lambda$ is an eigenvalue of $A$ if there exists a non-zero vector $v$ (dubbed the eigenvector for $\lambda$) such that $Av = \lambda v$.
Multiples of eigenvectors are also eigenvectors, as shown below: for any non-zero scalar $\alpha$,
$$A(\alpha v) = \alpha A v = \alpha \lambda v = \lambda (\alpha v).$$
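As a quick numerical sketch of this fact (the $2 \times 2$ matrix below is an arbitrary example, not one from the text), NumPy can verify that a scaled eigenvector is still an eigenvector:

```python
import numpy as np

# Check that a non-zero multiple of an eigenvector is an eigenvector
# for the same eigenvalue.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, V = np.linalg.eig(A)              # columns of V are eigenvectors
v = V[:, 0]                            # an eigenvector for lam[0]
alpha = -3.7                           # arbitrary non-zero scalar
w = alpha * v
print(np.allclose(A @ w, lam[0] * w))  # True: A(alpha v) = lam (alpha v)
```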
Definition 2 The eigenspace of $A$ corresponding to $\lambda$ is defined by $E_\lambda = \{v \in H : Av = \lambda v\}$.
For example, if a given eigenvalue $\lambda$ has two linearly independent eigenvectors $v_1$ and $v_2$, the eigenspace is given by $E_\lambda = \{\alpha_1 v_1 + \alpha_2 v_2 : \alpha_1, \alpha_2 \in \mathbb{C}\}$, where $A v_1 = \lambda v_1$ and $A v_2 = \lambda v_2$.
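Numerically, $E_\lambda$ can be computed as the null space of $A - \lambda I$. A minimal sketch, assuming SciPy is available and using a matrix chosen so that $\lambda = 2$ has a two-dimensional eigenspace:

```python
import numpy as np
from scipy.linalg import null_space

# E_lam is the null space of (A - lam I); here lam = 2 is repeated,
# so the eigenspace is two-dimensional.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
lam = 2.0
E = null_space(A - lam * np.eye(3))  # orthonormal basis for E_lam
print(E.shape)                       # (3, 2): two basis eigenvectors
print(np.allclose(A @ E, lam * E))   # True: every column satisfies Av = lam v
```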
Definition 3 An operator $A: H \to H$ is said to be self-adjoint if $A = A^*$, i.e., $\langle Ax, y \rangle = \langle x, Ay \rangle$ for all $x, y \in H$.
If $H = \mathbb{R}^N$, a self-adjoint operator corresponds to a symmetric matrix ($A = A^T$).
Theorem 1 All eigenvalues of a self-adjoint operator are real.
Let $\lambda$ be a complex eigenvalue of a self-adjoint operator $A$. Then $Av = \lambda v$ for some $v \neq 0$. For such a $v$, we have
$$\lambda \langle v, v \rangle = \langle \lambda v, v \rangle = \langle A v, v \rangle = \langle v, A v \rangle = \langle v, \lambda v \rangle = \overline{\lambda} \langle v, v \rangle,$$
and $\langle v, v \rangle \neq 0$ since $v \neq 0$.
Therefore we know that $\lambda$ is the same as its complex conjugate ($\lambda = \overline{\lambda}$). The only way for this to be possible is if the imaginary part of $\lambda$ is zero, and therefore $\lambda \in \mathbb{R}$.
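This is easy to observe numerically. A sketch, assuming NumPy (the random Hermitian matrix is an arbitrary test case):

```python
import numpy as np

# Eigenvalues of a self-adjoint (Hermitian) matrix are real.
# A = B + B^H equals its own conjugate transpose by construction.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B + B.conj().T
print(np.allclose(A, A.conj().T))        # True: A is self-adjoint
lam = np.linalg.eigvals(A)
print(np.max(np.abs(lam.imag)) < 1e-10)  # True: imaginary parts vanish
```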
Theorem 2 If $\lambda_1, \lambda_2$ are distinct eigenvalues ($\lambda_1 \neq \lambda_2$) of a self-adjoint operator $A$, then $E_{\lambda_1} \perp E_{\lambda_2}$.
Assume we pick some arbitrary $v_1 \in E_{\lambda_1}$ and $v_2 \in E_{\lambda_2}$. Then,
$$\lambda_1 \langle v_1, v_2 \rangle = \langle \lambda_1 v_1, v_2 \rangle = \langle A v_1, v_2 \rangle = \langle v_1, A v_2 \rangle = \langle v_1, \lambda_2 v_2 \rangle = \overline{\lambda_2} \langle v_1, v_2 \rangle = \lambda_2 \langle v_1, v_2 \rangle,$$
where the last equality follows from Theorem 1 ($\lambda_2 \in \mathbb{R}$).
Therefore we know that $(\lambda_1 - \lambda_2) \langle v_1, v_2 \rangle = 0$. Since we chose two distinct eigenvalues, we know that $\lambda_1 - \lambda_2 \neq 0$. Therefore we must have $\langle v_1, v_2 \rangle = 0$. This implies $v_1 \perp v_2$. Since $v_1$ and $v_2$ were chosen arbitrarily, this implies $E_{\lambda_1} \perp E_{\lambda_2}$.
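A small numerical sketch of this orthogonality (the symmetric matrix below is an arbitrary example with three distinct eigenvalues):

```python
import numpy as np

# For a symmetric matrix, eigenvectors belonging to distinct
# eigenvalues are orthogonal.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lam, V = np.linalg.eigh(A)   # eigh is the routine for symmetric input
print(lam)                   # [1. 2. 4.]: three distinct eigenvalues
v1, v2 = V[:, 0], V[:, 1]    # eigenvectors for two of them
print(abs(v1 @ v2) < 1e-12)  # True: <v1, v2> = 0
```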
Theorem 3 (Schur's Lemma) Let $H$ be an $N$-dimensional Hilbert space and $A: H \to H$ a linear operator. Then there exists an orthonormal basis $\{u_1, \ldots, u_N\}$ for $H$ and a set of coefficients $\{\alpha_{i,j}\}$ such that $A u_j = \sum_{i=1}^{j} \alpha_{i,j} u_i$ for $j = 1, \ldots, N$.
An important note: since $\{u_1, \ldots, u_N\}$ makes an orthonormal basis for the domain and range $H$, there always exist coefficients $\beta_{i,j}$ such that $A u_j = \sum_{i=1}^{N} \beta_{i,j} u_i$ for $j = 1, \ldots, N$; the content of Schur's Lemma is that the basis can be chosen so that $\beta_{i,j} = 0$ whenever $i > j$.
Example 1 Recall the matrix representation of an operator: given $A: H \to H$ and an orthonormal basis $\{u_1, \ldots, u_N\}$ for $H$, we can write the matrix representation $M$ of the operator with entries
$$M_{i,j} = \langle A u_j, u_i \rangle.$$
For the basis given by Schur's Lemma, $M_{i,j} = \alpha_{i,j} = 0$ whenever $i > j$, so we can represent $A$ as an upper-triangular matrix (where $\times$ represents a non-zero entry in the matrix):
$$M = \begin{bmatrix} \times & \times & \cdots & \times \\ 0 & \times & \cdots & \times \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \times \end{bmatrix}.$$
This shows that there exists an orthonormal basis for which the matrix representation of $A$ is an upper-triangular matrix.
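In floating point this factorization is available directly. A sketch using SciPy's schur routine (the random matrix is an arbitrary example):

```python
import numpy as np
from scipy.linalg import schur

# Schur decomposition: A = Z T Z^H with Z unitary (its columns are
# the orthonormal basis) and T upper triangular.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
T, Z = schur(A, output='complex')              # complex Schur form
print(np.allclose(A, Z @ T @ Z.conj().T))      # True: A is recovered
print(np.allclose(T, np.triu(T)))              # True: T is upper triangular
print(np.allclose(Z.conj().T @ Z, np.eye(4)))  # True: columns orthonormal
```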
Theorem 4 If $H$ is an $N$-dimensional Hilbert space and $A: H \to H$ is self-adjoint, then there exists an orthonormal basis $\{u_1, \ldots, u_N\}$ such that $A u_i = \lambda_i u_i$ for a set of scalars $\{\lambda_i\}$. The matrix representation is a diagonal matrix whose entries are the eigenvalues; that is, $M = \operatorname{diag}(\lambda_1, \ldots, \lambda_N)$.
Note that, according to the theorem, we can fully represent the operator $A$ by the aforementioned orthonormal basis $\{u_i\}$ and the diagonal matrix $M$.
Pick the orthonormal basis $\{u_1, \ldots, u_N\}$ specified by Schur's Lemma, so that $M_{i,j} = \alpha_{i,j} = 0$ whenever $i > j$. For $i < j$, we have:
$$M_{i,j} = \langle A u_j, u_i \rangle = \langle u_j, A u_i \rangle = \left\langle u_j, \sum_{k=1}^{i} \alpha_{k,i} u_k \right\rangle.$$
For each $k \leq i < j$, the term $\langle u_j, u_k \rangle$ in the sum is equal to zero by orthonormality. We then have:
$$M_{i,j} = 0 \quad \text{for all } i \neq j.$$
Thus, the only non-zero entries of the representation matrix are the diagonal entries $M_{i,i} = \alpha_{i,i}$. Furthermore, since Schur's Lemma now reduces to $A u_i = \alpha_{i,i} u_i$, these entries are eigenvalues of $A$, i.e., $\lambda_i = \alpha_{i,i}$.
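A compact numerical sketch of the theorem (again with an arbitrary symmetric matrix):

```python
import numpy as np

# A symmetric matrix is diagonalized by an orthonormal eigenbasis:
# A = U diag(lam) U^T.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, U = np.linalg.eigh(A)                     # orthonormal eigenvectors
print(np.allclose(U.T @ U, np.eye(2)))         # True: U is orthogonal
print(np.allclose(A, U @ np.diag(lam) @ U.T))  # True: A = U Lambda U^T
```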
Recall that to compute $Ax$ using its matrix representation $M$, there are three steps: first, analyze $x$ to obtain its coefficients $\tilde{x}_j = \langle x, u_j \rangle$ in the basis; second, compute the matrix-vector product $\tilde{y} = M \tilde{x}$; third, synthesize the result as $Ax = \sum_{i=1}^{N} \tilde{y}_i u_i$.
A matrix-vector product with a dense matrix requires $O(N^2)$ operations and is computationally intensive. If we can diagonalize $M$, we can find the matrix-vector product by performing the operation $\Lambda \tilde{x}$, where $\Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_N)$ is a diagonal matrix, so this step requires only $O(N)$ multiplications, making the operation more efficient.
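A sketch of the three steps in the eigenbasis, assuming NumPy (note that the analysis and synthesis steps are themselves dense products here, so the savings matter most when the basis admits a fast transform or when many products with the same operator are needed):

```python
import numpy as np

# Apply A via its diagonal representation in the eigenbasis.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, U = np.linalg.eigh(A)
x = np.array([1.0, -2.0])
alpha = U.T @ x               # step 1: coefficients of x in the eigenbasis
beta = lam * alpha            # step 2: diagonal product, O(N) multiplies
y = U @ beta                  # step 3: synthesize the result
print(np.allclose(y, A @ x))  # True: matches the dense product
```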