Solved – eigendecomposition of a covariance matrix

linear algebra

For a random variable $X = (x_1,x_2,\ldots,x_n)^T$, I understand that the entries of the covariance matrix would just be the covariance of $x_i$ and $x_j$, but how do I find the eigenvalues and eigenvectors after that, and how does that turn into the eigendecomposition of the covariance matrix?

Best Answer

Say the covariance matrix is $C$. The eigenvectors $\{v_1, \ldots, v_n\}$ and eigenvalues $\{\lambda_1, \ldots, \lambda_n\}$ are the vectors and scalars satisfying $C v_i = \lambda_i v_i$. In practice they are found numerically with an eigensolver, which is a standard function in most numerical libraries. There are various algorithms for computing them, which may be more or less efficient depending on the context (e.g. the size of the matrix and whether you need all eigenpairs or only a few).
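As a concrete sketch (with made-up example data), here is how you might form a sample covariance matrix and check the defining property $C v_i = \lambda_i v_i$ using NumPy's symmetric eigensolver:

```python
import numpy as np

# Hypothetical example: 500 samples of a 3-dimensional random variable X.
rng = np.random.default_rng(0)
A = np.array([[2.0, 0.5, 0.0],
              [0.0, 1.0, 0.3],
              [0.0, 0.0, 0.7]])
X = rng.standard_normal((500, 3)) @ A

# Sample covariance matrix C; rowvar=False means each column is one x_i.
C = np.cov(X, rowvar=False)

# eigh is the eigensolver for symmetric matrices: it returns the eigenvalues
# in ascending order and the eigenvectors as the columns of the second output.
eigvals, eigvecs = np.linalg.eigh(C)

# Verify the defining property C v_i = lambda_i v_i for each eigenpair.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(C @ v, lam * v)
```

Since a covariance matrix is symmetric, `np.linalg.eigh` (rather than the general `np.linalg.eig`) is the appropriate routine: it is faster and guarantees real eigenvalues and orthonormal eigenvectors.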

The eigendecomposition is a way of expressing a matrix in terms of its eigenvectors and eigenvalues. Let $V$ be a matrix containing the eigenvectors of $C$ along its columns, and let $\Lambda$ be a matrix containing the corresponding eigenvalues along the diagonal and zeros elsewhere. Because $C$ is real and symmetric, its eigenvectors can be chosen orthonormal, so $V$ is orthogonal and $V^{-1} = V^T$. The eigendecomposition of $C$ is then:

$$C = V \Lambda V^T$$
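The factorization above can be checked directly. A minimal sketch, using a small hypothetical covariance matrix:

```python
import numpy as np

# Hypothetical 3x3 covariance matrix (symmetric positive definite).
C = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 1.0]])

eigvals, V = np.linalg.eigh(C)   # V: eigenvectors of C along its columns
Lambda = np.diag(eigvals)        # eigenvalues on the diagonal, zeros elsewhere

# Because C is symmetric, V is orthogonal (V^T V = I), so C = V Lambda V^T.
C_rebuilt = V @ Lambda @ V.T
assert np.allclose(C_rebuilt, C)
```

Reassembling $V \Lambda V^T$ and recovering $C$ confirms that the decomposition is exact up to floating-point error.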