Singular value decomposition (SVD) can be thought of as an extension of eigenvalue decomposition to non-symmetric matrices. Consider an $m \times n$ matrix $A$. The following two matrices are symmetric and so have eigenvalue decompositions:
$$A^T A = V \Lambda_1 V^T, \qquad A A^T = U \Lambda_2 U^T,$$
where $V$ is an $n \times n$ matrix and $U$ is an $m \times m$ matrix. It turns out that we can therefore decompose the matrix as $A = U \Sigma V^T$, where $\Sigma$ is an $m \times n$ "diagonal" matrix: its diagonal entries $\Sigma_{ii} = \sigma_i = \sqrt{\lambda_i}$ are the singular values of $A$ (the $\lambda_i$ being the shared nonzero eigenvalues of $A^T A$ and $A A^T$), and $\Sigma_{ij} = 0$ for $i \neq j$. The pseudoinverse of the matrix $A$ can then be written as $A^\dagger = V \Sigma^\dagger U^T$, where $\Sigma^\dagger_{ii} = 1/\sigma_i$ whenever $\sigma_i \neq 0$ (and $0$ otherwise) and $\Sigma^\dagger_{ij} = 0$ for $i \neq j$.
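As a concrete illustration, here is a minimal NumPy sketch (an added example, not part of the original module; the matrix size, random seed, and tolerance are illustrative choices) that numerically verifies the relation between the singular values of $A$ and the eigenvalues of $A^T A$, and builds the pseudoinverse from the SVD:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3                       # illustrative dimensions, m >= n
A = rng.standard_normal((m, n))

# Eigenvalues of the symmetric matrix A^T A, sorted in descending order.
eigvals = np.linalg.eigvalsh(A.T @ A)[::-1]

# The singular values of A are the square roots of those eigenvalues.
U, s, Vt = np.linalg.svd(A, full_matrices=True)
assert np.allclose(s, np.sqrt(np.clip(eigvals, 0.0, None)))

# Reassemble A = U Sigma V^T with Sigma an m x n "diagonal" matrix.
Sigma = np.zeros((m, n))
Sigma[:n, :n] = np.diag(s)
assert np.allclose(A, U @ Sigma @ Vt)

# Pseudoinverse A^+ = V Sigma^+ U^T: invert only the nonzero singular values.
tol = max(m, n) * np.finfo(A.dtype).eps * s.max()
s_inv = np.where(s > tol, 1.0 / np.where(s > tol, s, 1.0), 0.0)
Sigma_pinv = np.zeros((n, m))
Sigma_pinv[:n, :n] = np.diag(s_inv)
A_pinv = Vt.T @ Sigma_pinv @ U.T
assert np.allclose(A_pinv, np.linalg.pinv(A))
```

The final assertion checks the hand-built $V \Sigma^\dagger U^T$ against NumPy's own `np.linalg.pinv`, which is computed the same way; the sign ambiguity in the singular vectors does not matter because the pseudoinverse itself is unique.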
Principal component analysis can be thought of as the KLT for sampled data. Assume that $\{x_1, \ldots, x_N\} \subset \mathbb{R}^n$ is a zero-mean dataset, and collect it into a matrix $X = [x_1 \; x_2 \; \cdots \; x_N]$. Next, compute the SVD $X = U \Sigma V^T$ with the corresponding eigenvalue decomposition $X X^T = U \Lambda U^T$. The matrix $U$ is known as the principal component analysis (PCA) matrix of $X$; its columns $u_1, \ldots, u_n$ are known as principal components, and its PCA coefficients are given by $C = U^T X$. The matrix $C$ contains the "scores" of all data points in the columns of $X$ against the principal components $u_1, \ldots, u_n$. One can show that the principal components in the matrix $U$ follow the formulation
$$u_i = \arg\max_{\substack{\|u\|_2 = 1 \\ u \perp u_1, \ldots, u_{i-1}}} \sum_{j=1}^{N} \langle u, x_j \rangle^2.$$
In words, $u_i$ is the direction in which the projections of the data have the largest variance while being orthogonal to the previously found principal components $u_1, \ldots, u_{i-1}$.
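The sketch below (again an added NumPy example with an assumed synthetic dataset; the anisotropic scaling and the random-direction test are illustrative) computes the PCA matrix and scores via the SVD of a zero-mean data matrix, confirms the link to the eigendecomposition of $X X^T$, and spot-checks the variance-maximization characterization of $u_1$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 3, 200                              # dimension and number of samples
# Synthetic data with strongly unequal variances along the coordinate axes.
X = rng.standard_normal((n, N)) * np.array([[3.0], [1.0], [0.2]])
X = X - X.mean(axis=1, keepdims=True)      # enforce the zero-mean assumption

# SVD of X: the columns of U are the principal components.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# PCA coefficients ("scores"): projection of each data point on each component.
C = U.T @ X

# Consistency with the eigendecomposition X X^T = U Lambda U^T.
lam = np.linalg.eigvalsh(X @ X.T)[::-1]
assert np.allclose(lam, s**2)

# Variance-maximization check: no random unit direction beats u_1 in
# total squared projection energy sum_j <u, x_j>^2.
def proj_energy(u):
    return np.sum((u @ X) ** 2)

u1 = U[:, 0]
for _ in range(1000):
    u = rng.standard_normal(n)
    u /= np.linalg.norm(u)
    assert proj_energy(u) <= proj_energy(u1) + 1e-9
```

Note that $\sum_j \langle u_1, x_j \rangle^2 = \sigma_1^2$, the largest squared singular value, which is why the random-direction test can never exceed it (up to floating-point error).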