Numerical Methods – Why Arnoldi Method Breakdown Implies Exact Solution?

numerical linear algebra, numerical methods

Suppose I want to solve $Ax = b$ with the Arnoldi iteration, where $A$ is nonsingular. If we end up getting $q_k = 0$ (using the notation from Wikipedia), it is stated that

The algorithm breaks down when $q_k$ is the zero vector. This happens when the minimal polynomial of $A$ is of degree $k$. In most applications of the Arnoldi iteration, including the eigenvalue algorithm below and GMRES, the algorithm has converged at this point.

I can see why the algorithm breaks down: it can no longer progress, since every subsequent basis vector would be $0$. But in this case, why can we say that a solution $x$ to $Ax = b$ lies in the subspace spanned by $q_1, \ldots, q_k$?

Best Answer

If we denote by $K_k=\mathrm{span}\{b,Ab,\ldots,A^{k-1}b\}$ the $k$-th Krylov subspace, then $K_k=\mathrm{span}\{q_1,\ldots,q_k\}$. If the Arnoldi algorithm breaks down at step $k$, this means that $A^kb\in K_k$, so $K_{k+1}=K_k$ and $K_k$ is invariant under $A$.
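As a small numerical illustration of this breakdown (a NumPy sketch; the matrix $A$ and vector $b$ below are made-up examples, not from the question), the iteration stops exactly when the new basis vector has zero norm, i.e. when $K_k$ has become $A$-invariant:

```python
import numpy as np

def arnoldi(A, b, max_steps, tol=1e-12):
    """Arnoldi iteration: build an orthonormal basis q_1, ..., q_k of the
    Krylov space K_k and report the step at which it breaks down (if any)."""
    n = len(b)
    Q = np.zeros((n, max_steps + 1))
    H = np.zeros((max_steps + 1, max_steps))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(max_steps):
        v = A @ Q[:, j]
        for i in range(j + 1):              # orthogonalize against q_1, ..., q_{j+1}
            H[i, j] = Q[:, i] @ v
            v = v - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < tol:               # breakdown: K_{j+1} = K_j is A-invariant
            return Q[:, :j + 1], j + 1
        Q[:, j + 1] = v / H[j + 1, j]
    return Q, None

# Made-up example: diagonal A, and b with only 3 nonzero eigencomponents,
# so the minimal polynomial of A with respect to b has degree 3.
n = 8
A = np.diag(np.arange(1.0, n + 1))
b = np.zeros(n)
b[:3] = 1.0

Q, k = arnoldi(A, b, max_steps=n)
print("breakdown at step k =", k)           # expected: 3
print("K_k is A-invariant:", np.allclose(A @ Q - Q @ (Q.T @ A @ Q), 0))
```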

Now GMRES minimizes the 2-norm of the residual $r_k=b-Ax_k$ over $x_k\in K_k$. Writing $c_k=Ax_k$, this is equivalent to minimizing $\|b-c_k\|_2$ over $c_k\in AK_k$. Since $K_k$ is $A$-invariant, $AK_k\subset K_k$, and because $A$ is nonsingular, $\dim AK_k=\dim K_k=k$, so in fact $AK_k=K_k$. In particular $b\in AK_k$, hence the minimizer is $c_k=b$, so $Ax_k=b$ and $x_k=x$.
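To see this minimization argument concretely, here is a minimal check (again a NumPy sketch, reusing the same made-up $A$, $b$, and breakdown step $k=3$ as above): minimizing $\|b-AKy\|_2$ over the Krylov basis drives the residual to zero and recovers the exact solution.

```python
import numpy as np

# Same made-up example as above: the Krylov space of A and b is A-invariant
# after k = 3 steps (the minimal polynomial of A w.r.t. b has degree 3).
n = 8
A = np.diag(np.arange(1.0, n + 1))
b = np.zeros(n)
b[:3] = 1.0
k = 3

# Monomial Krylov basis K_k = [b, Ab, ..., A^{k-1} b] (fine for a tiny demo;
# GMRES would use the orthonormal Arnoldi basis instead).
K = np.column_stack([np.linalg.matrix_power(A, j) @ b for j in range(k)])

# The GMRES step: minimize ||b - A x_k||_2 over x_k = K y in K_k,
# i.e. minimize ||b - c_k||_2 over corrections c_k in A K_k.
y, *_ = np.linalg.lstsq(A @ K, b, rcond=None)
x_k = K @ y

print("residual norm:", np.linalg.norm(b - A @ x_k))          # ~ 0
print("equals exact solution:", np.allclose(x_k, np.linalg.solve(A, b)))
```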

EDIT: There was a wrong argument in my previous answer; now it should be correct :-)
