[Math] Calculating Eigenvectors: Is the book wrong

Tags: eigenvalues-eigenvectors, linear-algebra

I have a covariance matrix:
$$
S= \begin{pmatrix}
16 & 10 \\
10 & 25
\end{pmatrix}
$$

I calculate my eigenvalues correctly (the same values the book finds):

$\lambda_1 = 31.47$ , $\lambda_2 = 9.53$

But now it comes to calculating eigenvectors:
I do everything as I was taught way back in Elementary Linear Algebra.

  1. $S v = \lambda v$ (where $v$ is the eigenvector)

  2. $(S - \lambda I)v = 0$

  3. Get Row-Echelon Form
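For concreteness, these three steps can be sketched in NumPy (this code is not part of the original post; it uses the exact value $\lambda_1 = \tfrac{41+\sqrt{481}}{2} \approx 31.4659$ rather than the rounded $31.47$):

```python
import numpy as np

S = np.array([[16.0, 10.0],
              [10.0, 25.0]])
lam1 = (41 + np.sqrt(481)) / 2        # exact lambda_1, about 31.4659

A = S - lam1 * np.eye(2)              # step 2: (S - lambda*I)
A[0] /= A[0, 0]                       # step 3: scale the first row ...
A[1] -= A[1, 0] * A[0]               # ... second row becomes (numerically) zero
print(A[0])                           # approximately [1, -0.6466]
```

Rounding $\lambda_1$ to $31.47$ shifts the off-diagonal entry slightly, to $-0.646412$.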

But when I do this I get the following reduced matrix:

$$
\begin{pmatrix}
1 & -0.646412 & 0 \\
0 & 0 & 0
\end{pmatrix}
$$

But this result doesn't seem consistent with my textbook, which says the eigenvectors are:

$(0.54, 0.84)^T$ and $(0.84, -0.54)^T$

I looked online for calculators and found one consistent with the book and a few consistent with my result:

Consistent with Book: http://comnuan.com/cmnn01002/

Consistent with Me: http://www.arndt-bruenner.de/mathe/scripts/engl_eigenwert2.htm

Any ideas?

Additional Information:

  • This problem stems from Principal Component Analysis

Best Answer

TLDR: The answers are the same.

The vectors $(0.646586, 1)$ and $(0.54, 0.84)$ point in (almost) the same direction; the only differences are due to rounding and the magnitudes of the vectors. The first has the benefit that one of its entries equals one. The second has the benefit that its magnitude is (almost) $1$, but both give essentially the same information.
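A quick numerical check makes this concrete (a NumPy sketch, not part of the original answer): normalizing $(0.646586, 1)$ to unit length recovers the book's vector up to rounding.

```python
import numpy as np

# Eigenvector read off the row-reduced matrix, with the free variable set to 1
v = np.array([0.646586, 1.0])

# Rescaling to unit length gives the textbook's form of the same vector
v_unit = v / np.linalg.norm(v)
print(v_unit)   # approximately [0.543, 0.840]
```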

Remember that an eigenvector for a specific eigenvalue $\lambda$ is any nonzero vector $v$ such that $Av=\lambda v$, and these vectors collectively make up an entire subspace of your vector space, referred to as the eigenspace for the eigenvalue $\lambda$. In the problem of determining eigenvalues and corresponding eigenvectors, you need only find some collection of eigenvectors that forms a basis for each corresponding eigenspace. There are infinitely many correct choices for such eigenvectors.
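The defining property $Sv = \lambda v$ can be checked directly for any scalar multiple (a NumPy sketch for the matrix $S$ above; not part of the original answer, and using the exact eigenvalue rather than the rounded $31.47$):

```python
import numpy as np

S = np.array([[16.0, 10.0],
              [10.0, 25.0]])
lam1 = (41 + np.sqrt(481)) / 2        # lambda_1, about 31.4659

# Every nonzero scalar multiple of an eigenvector is again an eigenvector
for scale in (1.0, 2.0, -5.0):
    v = scale * np.array([0.646586, 1.0])
    print(np.allclose(S @ v, lam1 * v, atol=1e-3))   # True each time
```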