Swapping rows to solve a system of equations using the Gauss-Seidel method

linear algebra, matrices, systems of equations

Consider the following system of equations:

\begin{align}
-2X + 5Y + 3Z + 2W = & \ 42 \notag \\
-X + 2Y - 2Z + 7W = & \ 44 \notag \\
6X + Y + Z + W = & \ 21 \notag \\
X - Y + 7Z + 4W = & \ 61 \notag
\end{align}

It is commonly stated that, to solve a system of equations with the Gauss-Seidel method, the matrix must be arranged so that each diagonal entry is the largest coefficient of its variable.

My question:

If I have a matrix (like the one generated from the above system), in which the main diagonal doesn't have the largest coefficients, can I just swap rows to make it suitable for the Gauss-Seidel method?

My attempt:

I have searched a lot, but almost every example I found already has the largest coefficients on the main diagonal.

Any help or hints are highly appreciated.

Best Answer

Solving the system produces

$$X = 1,\quad Y = 3,\quad Z = 5,\quad W = 7$$

Yes, swapping rows is a legal operation, and here it is in fact needed for the method to converge; see: Switching Rows in the Gauss-Seidel method.

We cannot put the system into strictly diagonally dominant form (the second row spoils it), but we can at least place each row's dominant term on the diagonal:

$$ A = \left( \begin{array}{cccc} 6 & 1 & 1 & 1 \\ -2 & 5 & 3 & 2 \\ 1 & -1 & 7 & 4 \\ -1 & 2 & -2 & 7 \\ \end{array} \right), ~~~ b = \left( \begin{array}{c} 21 \\ 42 \\ 61 \\ 44 \\ \end{array} \right)$$
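As an illustration (not part of the original answer), here is a minimal Python/NumPy sketch of that row swap: for each variable, take a not-yet-used row whose coefficient of that variable has the largest magnitude. This greedy choice happens to reproduce the ordering above; in general it need not yield a strictly diagonally dominant matrix, and here it does not.

```python
import numpy as np

# The system as originally written, rows in the order given in the question.
A_orig = np.array([[-2.0,  5.0,  3.0,  2.0],
                   [-1.0,  2.0, -2.0,  7.0],
                   [ 6.0,  1.0,  1.0,  1.0],
                   [ 1.0, -1.0,  7.0,  4.0]])
b_orig = np.array([42.0, 44.0, 21.0, 61.0])

# For each variable (column), pick the not-yet-used row whose entry in that
# column has the largest magnitude, so the dominant coefficient ends up on
# the diagonal after the swap.
n = len(b_orig)
order = []
for j in range(n):
    candidates = [i for i in range(n) if i not in order]
    order.append(max(candidates, key=lambda i: abs(A_orig[i, j])))

A = A_orig[order]
b = b_orig[order]
print(order)   # [2, 0, 3, 1]: rows 3, 1, 4, 2 of the original system
print(A)       # matches the reordered matrix above
print(b)       # [21. 42. 61. 44.]
```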

Selecting a starting point of

$$x^{(1)} = \left( \begin{array}{c} 1 \\ 1 \\ 1 \\ 1 \\ \end{array} \right)$$

the results of the iteration are

$$\left( \begin{array}{cccc} 1 & 1 & 1 & 1. \\ 3. & 8.6 & 8.94286 & 6.81224 \\ -0.559184 & 0.0857143 & 4.9137 & 7.58526 \\ 1.40255 & 2.9787 & 4.60502 & 6.95074 \\ 1.07759 & 3.28773 & 5.05817 & 6.9455 \\ 0.951435 & 2.96748 & 5.03344 & 7.01191 \\ 0.997863 & 2.97432 & 4.98983 & 7.00413 \\ 1.00529 & 3.00656 & 4.99782 & 6.99826 \\ 0.999559 & 3.00183 & 5.00132 & 6.99979 \\ 0.99951 & 2.9991 & 5.00006 & 7.00021 \\ 1.00011 & 2.99992 & 4.99986 & 7. \\ 1.00004 & 3.0001 & 5.00001 & 6.99998 \\ 0.999984 & 2.99999 & 5.00001 & 7. \\ 0.999998 & 2.99999 & 5. & 7. \\ 1. & 3. & 5. & 7. \\ 1. & 3. & 5. & 7. \\ \end{array} \right)$$

Compare that to the exact result above.
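For concreteness, here is a minimal Gauss-Seidel sketch (Python with NumPy, assumed here rather than taken from the answer) that reproduces the iterates above when started from the all-ones vector:

```python
import numpy as np

A = np.array([[ 6.0,  1.0,  1.0,  1.0],
              [-2.0,  5.0,  3.0,  2.0],
              [ 1.0, -1.0,  7.0,  4.0],
              [-1.0,  2.0, -2.0,  7.0]])
b = np.array([21.0, 42.0, 61.0, 44.0])

def gauss_seidel(A, b, x0, tol=1e-10, max_iter=100):
    """One Gauss-Seidel sweep updates each component in place, immediately
    reusing the components already updated during the same sweep."""
    x = np.asarray(x0, dtype=float).copy()
    history = [x.copy()]
    n = len(b)
    for _ in range(max_iter):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        history.append(x.copy())
        if np.linalg.norm(history[-1] - history[-2], np.inf) < tol:
            break
    return x, np.array(history)

x, history = gauss_seidel(A, b, np.ones(4))
print(np.round(history, 5))   # iterates matching the table above
print(x)                      # approximately [1. 3. 5. 7.]
```

Starting from the all-ones vector, the first sweep already gives $(3,\ 8.6,\ 8.94286,\ 6.81224)$, matching the second row of the table.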

Even though the matrix is not strictly diagonally dominant, the iterations still converge. For the reasons why, see: Gauss-Seidel Iterative Method and Necessary condition for Gauss–Seidel method to converge.
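Diagonal dominance is only a sufficient condition; what actually governs convergence is the spectral radius of the Gauss-Seidel iteration matrix $-(D+L)^{-1}U$, which must be strictly less than $1$. A quick numerical check (again a Python/NumPy sketch, not from the original answer):

```python
import numpy as np

A = np.array([[ 6.0,  1.0,  1.0,  1.0],
              [-2.0,  5.0,  3.0,  2.0],
              [ 1.0, -1.0,  7.0,  4.0],
              [-1.0,  2.0, -2.0,  7.0]])

# Split A = (D + L) + U and form the Gauss-Seidel iteration matrix M.
DL = np.tril(A)          # D + L: lower triangle including the diagonal
U = np.triu(A, k=1)      # strict upper triangle
M = -np.linalg.solve(DL, U)

rho = max(abs(np.linalg.eigvals(M)))
print(rho)               # expected to be below 1 here, consistent with the
                         # convergence observed in the iteration table
```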

Update: If we instead use the starting point $(0,0,0,0)^T$, we get

$$\left( \begin{array}{cccc} 0 & 0 & 0 & 0. \\ 3.5 & 9.8 & 9.61429 & 6.73265 \\ -0.857823 & -0.404762 & 4.93178 & 7.68789 \\ 1.46418 & 2.95145 & 4.53367 & 6.94695 \\ 1.09466 & 3.33888 & 5.06521 & 6.93533 \\ 0.943431 & 2.96412 & 5.03991 & 7.01357 \\ 0.997067 & 2.96945 & 4.9883 & 7.00497 \\ 1.00621 & 3.00752 & 4.99735 & 6.99798 \\ 0.999525 & 3.00221 & 5.00154 & 6.99974 \\ 0.999419 & 2.99895 & 5.00008 & 7.00024 \\ 1.00012 & 2.9999 & 4.99983 & 7. \\ 1.00004 & 3.00012 & 5.00001 & 6.99998 \\ 0.999982 & 2.99999 & 5.00002 & 7. \\ 0.999998 & 2.99999 & 5. & 7. \\ 1. & 3. & 5. & 7. \\ 1. & 3. & 5. & 7. \\ \end{array} \right)$$

It converges just fine.
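In terms of the hypothetical `gauss_seidel` helper sketched above, this run is simply `gauss_seidel(A, b, np.zeros(4))`; only the starting vector changes.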
