[Math] Necessary condition for Gauss–Seidel method to converge

linear-algebra · numerical-methods

The Wikipedia page on the Jacobi method says:

A sufficient (but not necessary) condition for the method to converge is that the matrix $A$ is strictly or irreducibly diagonally dominant.

I know that the Gauss–Seidel method converges if $A$ is strictly diagonally dominant or symmetric positive definite. But I was wondering whether this condition is only sufficient, or also necessary. I have no clue how to prove it, so I'm looking for an explanation.

Best Answer

Gauss–Seidel comes from a splitting of the matrix, $A = M - N$. To solve $Ax=b$, the general form of such a method is

\begin{equation} x^{(k+1)} = M^{-1}Nx^{(k)} +M^{-1}b \end{equation}

and of course the exact solution $x$ is a fixed point:

\begin{equation} x = M^{-1}Nx +M^{-1}b \end{equation}

Subtracting these gives the error recursion

\begin{align} x - x^{(k+1)} &= M^{-1}N (x - x^{(k)}) \end{align}

So we have convergence for every initial guess $x^{(0)}$ if and only if the spectral radius of the iteration matrix $M^{-1}N$ is less than 1.
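This criterion is easy to check numerically. Here is a minimal sketch (using NumPy, with a small diagonally dominant matrix chosen purely as an illustration) that forms the Gauss–Seidel splitting, computes the spectral radius of $M^{-1}N$, and runs the iteration:

```python
import numpy as np

# Illustrative 3x3 strictly diagonally dominant matrix (my choice, not from the answer).
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])

# Gauss-Seidel splitting: M is the lower triangular part of A, diagonal included.
M = np.tril(A)
N = M - A                      # so that A = M - N

# Iteration matrix M^{-1} N and its spectral radius.
G = np.linalg.solve(M, N)      # avoids forming an explicit inverse
rho = max(abs(np.linalg.eigvals(G)))
print(rho)                     # less than 1, so the iteration converges

# The iteration x^{(k+1)} = M^{-1}(N x^{(k)} + b).
x = np.zeros(3)
for _ in range(50):
    x = np.linalg.solve(M, N @ x + b)
print(np.allclose(x, np.linalg.solve(A, b)))  # converged to the true solution
```

Since $\rho(M^{-1}N) < 1$ here, the error $x - x^{(k)}$ is driven to zero for any starting vector.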

If you are lucky and the initial error $x - x^{(0)}$ happens to be an eigenvector of $M^{-1}N$ whose eigenvalue has modulus less than 1, then you can have convergence even when the spectral radius criterion fails.

Note: for Gauss–Seidel, the splitting takes $M$ to be the lower triangular part of $A$ including the diagonal, so $N = M - A$ is minus the strict upper triangular part.