How to Intuitively Understand Eigenvalue and Eigenvector

eigenvalues-eigenvectors, intuition, linear-algebra, statistics

I'm learning multivariate analysis, and I took two semesters of linear algebra as a freshman.

Eigenvalues and eigenvectors are easy to calculate, and the concept is not difficult to understand. I found that there are many applications of eigenvalues and eigenvectors in multivariate analysis. For example:

In principal components analysis, the proportion of total population variance due
to the $k$th principal component equals
$$\frac{\lambda_k}{\lambda_1+\lambda_2+\cdots+\lambda_p}$$
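As a concrete sanity check (a minimal sketch in NumPy; the covariance matrix below is made up purely for illustration), these proportions can be computed directly from the eigenvalues of a covariance matrix:

```python
import numpy as np

# A made-up 3x3 covariance matrix, just for illustration.
cov = np.array([[4.0, 1.0, 0.5],
                [1.0, 2.0, 0.3],
                [0.5, 0.3, 1.0]])

# eigh is appropriate for symmetric matrices; eigenvalues come back in ascending order.
eigenvalues, eigenvectors = np.linalg.eigh(cov)
eigenvalues = eigenvalues[::-1]        # reorder so lambda_1 >= lambda_2 >= ... >= lambda_p

# Proportion of total variance explained by each principal component.
proportions = eigenvalues / eigenvalues.sum()
print(proportions)                     # first entry is lambda_1 / (lambda_1 + ... + lambda_p)
```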

I think that multiplying an eigenvector by its eigenvalue has the same geometric effect as multiplying it by the matrix, i.e. $Av = \lambda v$.
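That identity is easy to verify numerically (a quick sketch; the matrix $A$ below is arbitrary):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])             # an arbitrary symmetric matrix

eigenvalues, eigenvectors = np.linalg.eig(A)

v = eigenvectors[:, 0]                 # one eigenvector (a column of the eigenvector matrix)
lam = eigenvalues[0]                   # its eigenvalue

# Applying the matrix and scaling by the eigenvalue give the same vector: A v = lambda v.
print(np.allclose(A @ v, lam * v))     # True
```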

I suspect this understanding is too naive, because I cannot see the link between eigenvalues and their applications in principal components and elsewhere.

I know how to derive almost every step from the assumptions to the results mathematically. What I would like is an intuitive or geometric understanding of eigenvalues and eigenvectors in the context of multivariate analysis (or in linear algebra generally).

Thank you!

Best Answer

Personally, I feel that intuition isn't something which is easily explained. Intuition in mathematics is synonymous with experience and you gain intuition by working numerous examples. With my disclaimer out of the way, let me try to present a very informal way of looking at eigenvalues and eigenvectors.

First, let us forget about principal component analysis for a little bit and ask ourselves exactly what eigenvectors and eigenvalues are. A typical introduction to spectral theory presents eigenvectors as vectors which are fixed in direction under a given linear transformation. The scaling factor of these eigenvectors is then called the eigenvalue. Under such a definition, I imagine that many students regard this as a minor curiosity, convince themselves that it must be a useful concept and then move on. It is not immediately clear, at least to me, why this should serve as such a central subject in linear algebra.

Eigenpairs are a lot like the roots of a polynomial. It is difficult to describe why the concept of a root is useful, not because there are few applications but because there are too many. If you tell me all the roots of a polynomial, then mentally I have an image of how the polynomial must look. For example, all monic cubics with three real roots look more or less the same. So one of the most central facts about the roots of a polynomial is that they ground the polynomial. A root literally roots the polynomial, limiting its shape.

Eigenvectors are much the same. If you have a line or plane which is invariant, then there is only so much you can do to the surrounding space without breaking that constraint. So in a sense eigenvectors are not important because they themselves are fixed, but rather because they limit the behavior of the linear transformation. Each eigenvector is like a skewer which helps to hold the linear transformation in place.

Very (very, very) roughly then, the eigenvalues of a linear mapping are a measure of the distortion induced by the transformation, and the eigenvectors tell you how that distortion is oriented. It is precisely this rough picture which makes PCA very useful.
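To make that rough picture a bit more concrete (a small sketch; the symmetric matrix below is arbitrary), each eigenvector points along a direction of stretching and the corresponding eigenvalue is the stretch factor:

```python
import numpy as np

# An arbitrary symmetric transformation: it stretches the plane by different
# amounts along two perpendicular directions.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eigh(A)

# A unit vector lying along an eigenvector direction is simply stretched by the
# eigenvalue: the eigenvectors give the orientation of the distortion, the
# eigenvalues give its magnitude.
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(f"direction {v} is stretched by a factor of {lam:.2f}")
```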

Suppose you have a set of data which is distributed as an ellipsoid oriented in $3$-space. If this ellipsoid were very flat in some direction, then in a sense we can recover much of the information we want even if we ignore the thickness of the ellipsoid. This is what PCA aims to do. The eigenvectors tell you how the ellipsoid is oriented and the eigenvalues tell you where the ellipsoid is distorted (where it's flat). If you choose to ignore the "thickness" of the ellipsoid, then you are effectively collapsing the eigenvector in that direction; you are projecting the ellipsoid onto its most informative directions. To quote wiki:

PCA can supply the user with a lower-dimensional picture, a "shadow" of this object when viewed from its (in some sense) most informative viewpoint.
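Here is a minimal sketch of that "shadow" idea (the synthetic data and all parameter choices are my own, purely for illustration): generate a flat ellipsoidal cloud in $3$-space, eigendecompose its sample covariance, and project onto the two leading eigenvectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ellipsoidal cloud in 3-space: wide in two directions, very flat in the third.
X = rng.normal(size=(1000, 3)) * np.array([5.0, 2.0, 0.1])

# Eigendecomposition of the sample covariance matrix.
cov = np.cov(X, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# Sort descending so the first eigenvector is the longest axis of the ellipsoid.
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

print(eigenvalues / eigenvalues.sum())   # the third component carries almost no variance

# The "shadow": keep the two most informative directions and drop the flat one.
X_projected = X @ eigenvectors[:, :2]
```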