Solved – How can eigenfaces (PCA eigenvectors on face image data) be displayed as images

eigenvalues, image processing, pca

I am trying to clarify some concepts for face recognition. According to my understanding, given a training set of images with each image measuring 255 x 255 pixels, we will have an n x (255 x 255) matrix of training images, where each row is one image flattened into a vector of 255 x 255 = 65025 pixels.

Using PCA, we would reduce the high dimensionality of 65025 down to something smaller, say 200.

However, I have seen blogs display the eigenfaces, and I would have assumed that each eigenface has a dimension of 200. How is it possible that the resulting eigenface images have the same dimensions as the original images? (Although the eigenfaces do look much blurrier.)

Best Answer

PCA performs dimensionality reduction by expressing $D$-dimensional vectors in an $M$-dimensional subspace, with $M<D.$ The vector itself can be written as a linear combination of $M$ eigenvectors, where each eigenvector is itself a unit vector that lives in the $D$-dimensional space.

Consider, for example, a two-dimensional space which we reduce to one dimension using PCA. We find that the principal eigenvector is the unit vector that points equally in the positive $\hat{x}$ and $\hat{y}$ directions, i.e. $$ \hat{v} = \frac{1}{\sqrt{2}} (\hat{x} + \hat{y}). $$ In this case I'm using the hat ($\hat{x}$) symbol to indicate that it's a unit vector. You can think of this as a one-dimensional line going through the two-dimensional plane. In our reduced space, we can express any point $w$ in the two-dimensional space as a one-dimensional (or scalar) value by projecting it onto the eigenvector, i.e. by calculating $w \cdot \hat{v}.$ So the point $(3,2)$ becomes $5/\sqrt{2},$ etc. But the eigenvector $\hat{v}$ is still expressed in the original two dimensions.
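Here is that projection as a quick NumPy check (a minimal sketch; the array names are just illustrative):

```python
import numpy as np

# Principal eigenvector: a unit vector pointing equally along x and y.
v = np.array([1.0, 1.0]) / np.sqrt(2)

# Project the point (3, 2) onto the 1-D subspace spanned by v.
w = np.array([3.0, 2.0])
a = w @ v          # scalar coordinate in the reduced space

print(a)           # 3.5355..., i.e. 5/sqrt(2)
print(a * v)       # [2.5 2.5]: the projection mapped back into 2-D
```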

In general, we express a $D$-dimensional vector, $x,$ as a reduced $M$-dimensional vector $a$, where each component $a_i$ of $a$ is given by $$ a_i = \sum_j x_j V_{i j} $$ where $V_{i j}$ is the $j$th component of the $i$th eigenvector, and $i = 1, \dots, M$ and $j = 1, \dots, D.$ For that to work, the $i$th eigenvector must have $D$ components to take an inner product with $x$.
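In matrix form this is a single matrix-vector product, $a = V x$, with $V$ of shape $M \times D$. A minimal NumPy sketch with random stand-in data (the shapes, not the values, are the point):

```python
import numpy as np

D, M = 65025, 200                  # original and reduced dimensionality
rng = np.random.default_rng(0)

V = rng.standard_normal((M, D))    # stand-in for M eigenvectors, one per row
x = rng.standard_normal(D)         # stand-in for one flattened image

a = V @ x                          # a_i = sum_j V_ij x_j, one inner product per row

print(a.shape)                     # (200,): the reduced representation
```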

In your case, you obtain the "reduced" vector of 200 components by taking the original image, a vector of 65025 components, and taking its inner product with each of the 200 eigenvectors, each of which also has 65025 components. Each inner product result is one component of your 200-dimensional vector. So we expect $M$ eigenvectors, each of which is $D$-dimensional: the eigenvectors live in the original space, not the reduced one. That is why an eigenface can be reshaped from its 65025 components back into a 255 x 255 image and displayed. (The leading eigenfaces look blurry because they mostly capture the coarse, low-frequency variation shared across faces.)
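Putting it together, here is a minimal end-to-end sketch with random data standing in for a real training set of flattened face images (in practice X would hold actual photos); the PCA is done via an SVD of the centered data:

```python
import numpy as np
import matplotlib.pyplot as plt

n, h, w = 250, 255, 255                      # 250 training images, 255 x 255 each
rng = np.random.default_rng(0)
X = rng.standard_normal((n, h * w))          # stand-in: one flattened image per row

# PCA: center the data, then take the top right singular vectors,
# which are the eigenvectors of the covariance matrix.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
eigenfaces = Vt[:200]                        # M x D: each row has 65025 components

# Each eigenface still lives in the original 65025-dimensional space,
# so it reshapes straight back into a 255 x 255 image for display.
plt.imshow(eigenfaces[0].reshape(h, w), cmap="gray")
plt.title("First eigenface")
plt.show()

# Reduced 200-component representation of the first training image:
a = eigenfaces @ Xc[0]
print(a.shape)                               # (200,)
```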