Solved – Do the principal components change if we apply PCA more than once (recursively) on data

data-transformation, eigenvalues, pca, projection, svd

Consider a set $X = (X_1; \dots; X_n)$ of $n$ data points such that each $X_i \in \mathbb{R}^d$ is a column vector. Let $Y = \operatorname{pca\_proj}(X)$ denote the projection of the points in $X$ onto the PCA components, i.e.
$$Y_i = W X_i \in \mathbb{R}^d,$$
where $W \in \mathbb{R}^{d \times d}$ is the projection matrix obtained from PCA: each row of $W$ is an eigenvector of $X^T X$, with $X$ here read as the $n \times d$ data matrix whose $i$-th row is $X_i^T$.

Now let's define $Z = \operatorname{pca\_proj}(Y)$. My question is: does $Y = Z$? If not, what happens if we keep applying PCA recursively to a matrix? Does it converge to a specific matrix?
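
For concreteness, here is a minimal NumPy sketch of what I mean by $\operatorname{pca\_proj}$ (the column-per-point layout and the centering step are just my assumptions for illustration, not a fixed implementation):

```python
import numpy as np

def pca_proj(X):
    """Project the d x n matrix X (one data point per column) onto its
    PCA basis: Y = W @ Xc, where the rows of W are the eigenvectors of
    the sample covariance of X, sorted by decreasing eigenvalue."""
    Xc = X - X.mean(axis=1, keepdims=True)     # center each feature
    cov = Xc @ Xc.T / (X.shape[1] - 1)         # d x d sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)     # columns of eigvecs are eigenvectors
    order = np.argsort(eigvals)[::-1]          # descending eigenvalue order
    W = eigvecs[:, order].T                    # rows of W are the principal axes
    return W @ Xc                              # projected points, still d x n
```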

Best Answer

PCA at its heart involves diagonalizing a matrix, i.e. solving for the eigenvalues and eigenvectors of that matrix (for PCA, the covariance matrix of the data). The whole purpose of the calculation is to find a basis in which that matrix is diagonal (only elements along the diagonal are nonzero). After the first projection, the covariance of $Y$ is already diagonal, so if you solve again you are computing the eigenvectors of a diagonal matrix; these are just the standard basis vectors, and the new projection matrix is the identity (up to the ordering and signs of the components). Hence your PC vectors will be the same regardless of how many times you apply the transformation, and $Z = Y$ (up to sign).
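
A quick numerical check, reusing the `pca_proj` sketch from the question above (the random test data and variable names are only illustrative): the covariance of $Y$ is diagonal, and applying PCA a second time changes nothing beyond possible sign flips.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 200))    # 200 points in R^5, one per column

Y = pca_proj(X)                      # first PCA projection
Z = pca_proj(Y)                      # PCA applied a second time

# The covariance of Y is already diagonal ...
cov_Y = np.cov(Y)
print(np.allclose(cov_Y, np.diag(np.diag(cov_Y))))   # True

# ... so the second projection is the identity up to sign flips of the
# components, and Z agrees with Y up to those signs.
print(np.allclose(np.abs(Z), np.abs(Y)))             # True
```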
