Why is the Power Method giving me one of the smaller eigenvalues instead of the dominant one

Tags: approximation, eigenvalues-eigenvectors, numerical-linear-algebra

I was working through the power method and ran into some confusion. When computing an eigenvalue/eigenvector by repeatedly multiplying a matrix $A$ against a vector $b_0$, are you always supposed to get the eigenvector of the dominant eigenvalue? I used two different initial $b_0$'s, identical except for one entry of opposite sign, and got two different eigenvectors of the matrix, each associated with a different eigenvalue. One corresponded to the eigenvalue of largest absolute value, while the other corresponded to the largest positive eigenvalue.

I guess I mostly want to ask: isn't the power method supposed to give only the dominant eigenvalue? Why am I instead getting two different eigenvectors/eigenvalues from two such similar initial $b_0$'s?

Best Answer

Probably your second choice of $b_0$ was perpendicular to the eigenspace of the dominant eigenvalue. In that case the component of $b_0$ along the dominant eigenvector is zero and stays zero under the iteration, so the power method instead converges to the eigenvector of the largest eigenvalue that actually appears in $b_0$.

For example, let $$A=\begin{pmatrix}3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1\end{pmatrix}.$$ Then, see that for any $\varepsilon>0$ we can define $$u=\begin{pmatrix}\varepsilon \\ 1 \\ 0\end{pmatrix}$$ and $$v=\begin{pmatrix}0 \\ 1 \\ 0\end{pmatrix}.$$

$u$ and $v$ are as close as we like, but $$A^n u = \begin{pmatrix}\varepsilon 3^n \\ 2^n \\ 0\end{pmatrix}$$ and $$A^n v = \begin{pmatrix} 0 \\ 2^n \\ 0\end{pmatrix}.$$

For large $n$, the first coordinate $\varepsilon 3^n$ dominates $A^n u$, so the normalized iterates converge to the dominant eigenvector. In $v$, however, that coordinate is exactly zero, so the normalized iterates of $A^n v$ converge to the eigenvector for the eigenvalue $2$ instead.
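The example above can be checked numerically. Here is a minimal power-method sketch in NumPy (the function name, iteration count, and use of the Rayleigh quotient are my own illustrative choices, not from the question): a start vector with even a tiny component along the dominant eigenvector converges to $\lambda = 3$, while a start vector exactly perpendicular to it converges to $\lambda = 2$.

```python
import numpy as np

def power_method(A, b0, iters=100):
    """Repeatedly apply A and renormalize; return an eigenvalue/vector estimate."""
    b = b0 / np.linalg.norm(b0)
    for _ in range(iters):
        b = A @ b
        b = b / np.linalg.norm(b)
    # Rayleigh quotient as the eigenvalue estimate for the converged direction
    return b @ (A @ b), b

A = np.diag([3.0, 2.0, 1.0])

# u has a tiny (epsilon) component along the dominant eigenvector e1
lam_u, _ = power_method(A, np.array([1e-6, 1.0, 0.0]))
# v is exactly perpendicular to e1, and the zero is preserved exactly
lam_v, _ = power_method(A, np.array([0.0, 1.0, 0.0]))

print(lam_u)  # converges to 3
print(lam_v)  # stuck at 2
```

Note that in floating-point arithmetic the zero first coordinate of $v$ is preserved here only because $A$ is diagonal; for a general matrix, rounding errors typically reintroduce a small component along the dominant eigenvector, and the iteration eventually drifts back toward it.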