Does the eigenvalue that the Power Method converges to have to be “strictly” dominant?

approximation, eigenvalues-eigenvectors, matrices, numerical linear algebra, numerical methods

In almost every text I can find on the Power Method, the authors say something to this effect:

If we assume $A$ has an eigenvalue that is strictly greater in magnitude than its other eigenvalues, and the starting vector $b_0$ has a nonzero component in the direction of an eigenvector associated with the dominant eigenvalue, then a subsequence of $(b_k)$ converges to an eigenvector associated with the dominant eigenvalue.
Without the two assumptions above, the sequence $(b_k)$ does not necessarily converge.

source

I can intuitively understand the second condition. But can someone explain to me intuitively (or with an example) why, for an $n \times n$ matrix with, say, $2$ (or $3$, or maybe even $n-1$) equal dominant eigenvalues that are greater in magnitude than the remaining $n-2$ (or $n-3$, …, or $1$) eigenvalues, the iteration might fail to converge?

What I understand is: if we have a single strictly dominant eigenvalue, then starting from an initial guess $b_0$ (which has a component in the direction of the corresponding eigenvector), the iterates are bound to converge eventually to that eigenvector.

What I don't understand, however, is why this logic cannot be extended to more than one dominant eigenvalue and written this way: "if we have dominant eigenvalue(s), then starting from an initial guess $b_0$ (which has a component in the eigenspace corresponding to the dominant eigenvalue), the iterates are bound to converge eventually to a vector in that eigenspace." (Assuming, of course, that the matrix is diagonalisable, which is a requirement in all cases anyway.)
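For concreteness, here is a minimal NumPy sketch of the normalized power iteration (the matrices and iteration count below are illustrative choices, not taken from any of the texts quoted above). With a strictly dominant eigenvalue the iterates settle on the dominant eigenvector; with a repeated dominant eigenvalue they settle into the dominant eigenspace, just as the reasoning above suggests:

```python
import numpy as np

def power_iteration(A, b0, num_iters=100):
    """Normalized power iteration: b_{k+1} = A b_k / ||A b_k||."""
    b = b0 / np.linalg.norm(b0)
    for _ in range(num_iters):
        b = A @ b
        b = b / np.linalg.norm(b)
    return b

# Strictly dominant eigenvalue 2: iterates converge to the eigenvector e1.
A1 = np.array([[2.0, 0.0],
               [0.0, 1.0]])
print(power_iteration(A1, np.array([1.0, 1.0])))  # -> approx [1, 0]

# Repeated dominant eigenvalue 2 (multiplicity 2): iterates converge to
# a unit vector in the dominant eigenspace span{e1, e2}.
A2 = np.array([[2.0, 0.0, 0.0],
               [0.0, 2.0, 0.0],
               [0.0, 0.0, 1.0]])
print(power_iteration(A2, np.array([1.0, 1.0, 1.0])))  # -> approx [0.707, 0.707, 0]
```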

Best Answer

The answer is “yes”: take, for example, the basic $2 \times 2$ circulant matrix $$ C= \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix},$$ which has two eigenvalues of equal (maximal) magnitude ($\pm 1$).

If $$x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix},$$ then $$Cx = \begin{bmatrix} x_2 \\ x_1 \end{bmatrix}$$ and

$$C^2x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}.$$

Each application of $C$ swaps the two components, so unless $x_1 = x_2$ the iterates oscillate between two distinct vectors forever: the even-indexed and odd-indexed subsequences converge, but the full sequence does not. Thus, convergence fails in general. Note that the obstruction here is two distinct eigenvalues of equal magnitude; if the dominant eigenvalue is instead a single repeated eigenvalue (multiplicity greater than one), your reasoning does go through, and the directions of the iterates converge to the dominant eigenspace.
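To see this numerically, here is a short check (an illustrative sketch, using the same matrix $C$ as above): starting from any $x$ with $x_1 \neq x_2$, the normalized iterates swap their two entries at every step and never settle down.

```python
import numpy as np

# The swap matrix C from above, with eigenvalues +1 and -1 of equal magnitude.
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])

b = np.array([2.0, 1.0])            # any start with b[0] != b[1]
b = b / np.linalg.norm(b)
for k in range(6):
    b = C @ b                       # C is orthogonal, so ||C b|| = ||b||
    b = b / np.linalg.norm(b)
    print(k, b)                     # entries swap each step; no convergence
```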