Solved – How to think of reduced dimensions in PCA on facial images (eigenfaces)

Tags: dimensionality reduction, image processing, pca

I've been reading up a bit on eigenfaces. I think I understand the basic concept: vectorize a set of facial images, then reduce the dimensionality of the images using PCA. What I don't really understand is the visualization of the lower-dimensional representation of the images.

In facial images, the number of dimensions is the number of pixels, so if you reduce the dimensionality of an image, you reduce the number of pixels. But then how do you visualize this image? Is it just a much smaller version of the full-dimensional original? The examples I have seen do not look like this. Or, alternatively, do you make each pixel bigger so that the overall image is the same size as the original?

Best Answer

Just a hint, after reading your comment. Each image (face) is represented as a stacked vector of length $N$, and the $K$ different faces make up a dataset stored in a matrix $X$ of size $K \times N$. The likely source of confusion is that PCA gives you a set of eigenvectors (eigenfaces) $I = \{u_1, u_2, \ldots, u_D\}$ of the covariance matrix $X^TX$, where each $u_i \in \mathbb{R}^{N}$. You do not reduce the number of pixels used to represent a face; rather, you find a small number of eigenfaces that span a space which suitably represents your faces. The eigenfaces themselves still live in the original space (they have the same number of pixels as the original faces), which is why they can be displayed as images of the original size.
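To make this concrete, here is a minimal NumPy sketch (my own illustration, not part of the original answer) that extracts eigenfaces from a stack of vectorized images. The image size, face count, value of $D$ and the random placeholder data are all assumptions:

```python
import numpy as np

K, h, w = 100, 64, 64          # number of faces, image height/width (assumed)
N = h * w                      # pixels per face = dimensionality
rng = np.random.default_rng(0)
X = rng.random((K, N))         # stand-in for real face data, one face per row

X_centered = X - X.mean(axis=0)          # PCA works on mean-centered data
# SVD of the centered data: the rows of Vt are the eigenvectors of the
# N x N covariance matrix X^T X, i.e. the eigenfaces, sorted by variance.
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

D = 20
eigenfaces = Vt[:D]                      # shape (D, N): the D leading eigenfaces
face_image = eigenfaces[0].reshape(h, w) # same pixel count as the originals
print(eigenfaces.shape)                  # (20, 4096) -- each eigenface is in R^N
```

Note that each eigenface has all $N$ pixels; only the *number of basis vectors* needed to describe a face has been reduced, from $N$ to $D$.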

The idea is that you use the obtained eigenfaces as a sort of archetype that can be used to perform face detection.
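For instance, continuing the sketch above, a new face can be projected onto the eigenfaces and compared with the stored faces in the $D$-dimensional space (a hypothetical nearest-neighbour illustration, not the one canonical eigenface pipeline):

```python
scores = X_centered @ eigenfaces.T       # (K, D) component scores, one row per face

mean_face = X.mean(axis=0)
new_face = X[3] + 0.01 * rng.standard_normal(N)     # a noisy query face
new_scores = (new_face - mean_face) @ eigenfaces.T  # its D-dim representation

# Nearest neighbour in eigenface space: compare D-dim scores, not N pixels
distances = np.linalg.norm(scores - new_scores, axis=1)
print("closest stored face:", distances.argmin())   # should recover index 3
```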

Also, purely in terms of storage costs, imagine you have to keep an album of $K$ faces, each composed of $N$ pixels. Instead of keeping all $K$ faces, you keep only $D$ eigenfaces, where $D \ll K$, together with the component scores, and you can recreate any face (with a certain loss of precision).
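Continuing the same sketch, this storage argument can be checked numerically: you keep the mean face ($N$ floats, which the reconstruction also needs), $D$ eigenfaces ($D \cdot N$ floats) and $K$ score vectors ($K \cdot D$ floats) instead of the $K \cdot N$ raw pixels:

```python
stored = N + D * N + K * D               # mean face + eigenfaces + scores
original = K * N                         # floats needed for the raw album
print(f"compression ratio: {stored / original:.2f}")   # ~0.21 here

# Lossy reconstruction of every face from its D component scores
reconstructed = mean_face + scores @ eigenfaces       # shape (K, N)
error = np.linalg.norm(X - reconstructed) / np.linalg.norm(X)
print(f"relative reconstruction error: {error:.3f}")
```

With the random placeholder data used here the reconstruction error is of course large; real faces are highly correlated, which is precisely why a small $D$ captures most of their variance.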