Convert any non-singular square matrix to a strictly diagonally dominant one using only elementary row operations

Tags: gaussian-elimination, matrices

Elementary row operations

  1. Swap the positions of two of the rows.
  2. Multiply one of the rows by a nonzero scalar.
  3. Add or subtract a scalar multiple of one row to another row.
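For concreteness, here is what the three operations look like in NumPy (a minimal sketch of my own; the matrix and scalars are arbitrary):

```python
import numpy as np

A = np.array([[2.0, 6.0, -1.0],
              [6.0, 15.0, 2.0],
              [1.0, 1.0, 54.0]])

A[[0, 1]] = A[[1, 0]]   # 1. swap rows R1 and R2
A[1] *= 3.0             # 2. multiply row R2 by the nonzero scalar 3
A[2] -= 2.0 * A[0]      # 3. subtract 2*R1 from R3
print(A)
```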

Strictly diagonally dominant matrix

An $n\times n$ matrix $A$ is strictly diagonally dominant if the absolute value of each entry on the main diagonal is greater than the sum of the absolute values of the other entries in the same row:
$$|a_{ii}|>\sum_{j\ne i}|a_{ij}|\quad\text{for all } i.$$
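This definition is easy to test numerically; here is a minimal NumPy sketch (the function name is my own, not from the source):

```python
import numpy as np

def is_strictly_diagonally_dominant(A: np.ndarray) -> bool:
    """True iff |a_ii| > sum of |a_ij| over j != i, for every row i."""
    d = np.abs(np.diag(A))           # absolute diagonal entries
    off = np.abs(A).sum(axis=1) - d  # row sums of the off-diagonal entries
    return bool(np.all(d > off))
```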

A non-singular matrix can sometimes be converted into a strictly diagonally dominant one using only elementary row operations. For example:
$$\begin{pmatrix}
2 & 6 & -1 \\
6 & 15 & 2 \\
1 & 1 & 54
\end{pmatrix}$$

$R_1 \leftarrow 15R_1 - 6R_2$:
$$\begin{pmatrix}
-6 & 0 & -27 \\
6 & 15 & 2 \\
1 & 1 & 54
\end{pmatrix}$$

$R_1 \leftarrow 2R_1 + R_3$:
$$\begin{pmatrix}
-11 & 1 & 0 \\
6 & 15 & 2 \\
1 & 1 & 54
\end{pmatrix}$$
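These two operations can be replayed and verified in NumPy (a sketch restating the example above, not part of the original question):

```python
import numpy as np

A = np.array([[2.0, 6.0, -1.0],
              [6.0, 15.0, 2.0],
              [1.0, 1.0, 54.0]])

A[0] = 15.0 * A[0] - 6.0 * A[1]  # R1 <- 15*R1 - 6*R2  ->  (-6, 0, -27)
A[0] = 2.0 * A[0] + A[2]         # R1 <- 2*R1 + R3     ->  (-11, 1, 0)

d = np.abs(np.diag(A))
print(np.all(d > np.abs(A).sum(axis=1) - d))  # True: strictly dominant
```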

However, I have not yet found a concrete algorithm that is guaranteed to produce a strictly diagonally dominant matrix from an arbitrary starting matrix.

My question is inspired by the condition of convergence of Gauss-Jacobi and Gauss-Seidel methods:

A sufficient (but not necessary) condition for the Gauss-Jacobi/Gauss-Seidel iterative methods to converge is that the matrix $A$ is strictly or irreducibly diagonally dominant.

Source
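To make the connection concrete, here is a minimal Gauss-Seidel sketch (my own illustration, not from the source) run on the strictly diagonally dominant matrix obtained above; the right-hand side `b` is arbitrary:

```python
import numpy as np

def gauss_seidel(A, b, iters=50):
    """Basic Gauss-Seidel sweep: x[i] is updated in place, so the newest
    values are used for j < i and the previous ones for j > i."""
    x = np.zeros_like(b)
    for _ in range(iters):
        for i in range(len(b)):
            s = A[i] @ x - A[i, i] * x[i]   # sum over j != i
            x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[-11.0, 1.0, 0.0],
              [6.0, 15.0, 2.0],
              [1.0, 1.0, 54.0]])
b = np.array([1.0, 2.0, 3.0])
print(np.allclose(gauss_seidel(A, b), np.linalg.solve(A, b)))  # True
```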

Note

The transformed matrix should be in a form that can be used for the Gauss-Jacobi/Gauss-Seidel method. Applying Gaussian elimination and then multiplying each row by some non-zero constant yields a strictly diagonally dominant (diagonal) matrix; however, matrices in this form cannot be used for the Gauss-Jacobi/Gauss-Seidel method. This means that, on top of being strictly diagonally dominant, each row of the matrix should contain at least two non-zero entries.

The matrix below is invalid for Gauss-Jacobi/Gauss-Seidel:
1 0 0 0 
0 2 0 0 
0 0 3 0 
0 0 0 4 

The matrix below is valid for Gauss-Jacobi/Gauss-Seidel:
5 1 0 0 
0 2 0 -1 
0 1 3 -1 
1 0 0 4 
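The validity test described in this note can be written down directly; here is a NumPy sketch (`usable_for_jacobi_seidel` is a hypothetical helper name, not from the source):

```python
import numpy as np

def usable_for_jacobi_seidel(A: np.ndarray) -> bool:
    d = np.abs(np.diag(A))
    strictly_dominant = np.all(d > np.abs(A).sum(axis=1) - d)
    two_nonzeros_per_row = np.all((A != 0).sum(axis=1) >= 2)
    return bool(strictly_dominant and two_nonzeros_per_row)

print(usable_for_jacobi_seidel(np.diag([1.0, 2.0, 3.0, 4.0])))   # False
print(usable_for_jacobi_seidel(np.array([[5.0, 1, 0, 0],
                                         [0, 2, 0, -1],
                                         [0, 1, 3, -1],
                                         [1, 0, 0, 4]])))        # True
```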

My attempt

Here's an algorithm I came up with for obtaining a non-strictly diagonally dominant matrix:

Given a non-singular $n\times n$ square matrix,

  1. Convert the matrix to reduced row echelon form using Gaussian elimination. Since the matrix is non-singular, the result is the identity matrix:
1 0 0 0 ... 0
0 1 0 0 ... 0
0 0 1 0 ... 0
.     .     .
.       .   .
0 0 ... 0 1 0
0 0 ... 0 0 1
  2. Starting from the top row, add to each row the row directly below it $(R_1 + R_2,\ R_2 + R_3, \ldots)$. Each row, except the last, will contain the pattern $1\ 1$:
1 1 0 0 ... 0
0 1 1 0 ... 0
0 0 1 1 ... 0
.     .     .
.       .   .
0 0 ... 0 1 1
0 0 ... 0 0 1
  3. Starting from the bottom row, add to each row the row directly above it $(R_n + R_{n-1},\ R_{n-1} + R_{n-2}, \ldots)$:
1 1 0 0 ... 0
1 2 1 0 ... 0
0 1 2 1 ... 0
.     .     .
.       .   .
0 0 ... 1 2 1
0 0 ... 0 1 2

At this point, we obtain a diagonally dominant matrix, although not a strictly dominant one.
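Here is a NumPy sketch of steps 2 and 3, assuming step 1 has already reduced the matrix to the identity (any non-singular matrix reduces to $I_n$), for $n=5$:

```python
import numpy as np

n = 5
M = np.eye(n)  # output of step 1

# Step 2: add to each row the row below it, from the top down.
for i in range(n - 1):
    M[i] += M[i + 1]

# Step 3: add to each row the row above it, from the bottom up.
for i in range(n - 1, 0, -1):
    M[i] += M[i - 1]

d = np.abs(np.diag(M))
print(M)                                       # the 1/2/1 tridiagonal pattern
print(np.all(d >= np.abs(M).sum(axis=1) - d))  # True: weak dominance
print(np.all(d > np.abs(M).sum(axis=1) - d))   # False: not strict
```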

Is it possible to extend this algorithm to obtain a strictly diagonally dominant matrix? Or is there an algorithm for converting any non-singular square matrix into a strictly diagonally dominant one using only elementary row operations?

Updates

Thanks to Jean Marie, I found an algorithm in Matlab that rearranges rows for Gauss-Seidel. However, I was unable to execute the code on an online compiler due to a bug in the code. The algorithm also appears to have some limitations:

  • It assumes that each row of the matrix has a dominant term.
  • It is only capable of row swapping.
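A greedy row-swapping rearrangement along those lines might look like the following hypothetical re-creation (not the linked Matlab code; it shares the same limitation of giving up when no suitable permutation is found greedily):

```python
import numpy as np

def rearrange_rows(A: np.ndarray) -> np.ndarray:
    """Move each row's largest-magnitude entry onto the main diagonal,
    using row swaps only."""
    cols = np.argmax(np.abs(A), axis=1)  # column of each row's dominant term
    if len(set(cols.tolist())) != len(cols):
        raise ValueError("greedy choice fails: two rows want the same column")
    P = np.zeros_like(A)
    P[cols] = A                          # row i of A moves to row cols[i]
    return P

A = np.array([[1.0, 1.0, 54.0],
              [6.0, 15.0, 2.0],
              [-11.0, 1.0, 0.0]])
print(rearrange_rows(A))                 # strictly dominant after reordering
```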

Best Answer

Unless I'm missing something, in the 2nd and 3rd phases of your attempt you can add a sufficiently small, non-unit multiple of one row to another. Check this out:

Given any invertible matrix $A$:

1- Apply Gaussian elimination to $A$ to yield the identity matrix $I_n$.

Result: $$\begin{bmatrix}1&0&0&\cdots &0&0\\0&1&0&\cdots &0&0\\0&0&1&\cdots &0&0\\\vdots&\vdots&\vdots&\cdots&\vdots&\vdots\\0&0&0&\cdots &1&0\\0&0&0&\cdots&0 &1\\\end{bmatrix}.$$

2- Add to each row an $\epsilon$-multiple of its next row, when possible (this leaves you with a $1\,\epsilon$-pattern matrix, rather than a $1\,1$-pattern matrix).

Result: $$\begin{bmatrix}1&\epsilon&0&\cdots &0&0\\0&1&\epsilon&\cdots &0&0\\0&0&1&\cdots &0&0\\\vdots&\vdots&\vdots&\cdots&\vdots&\vdots\\0&0&0&\cdots &1&\epsilon\\0&0&0&\cdots&0 &1\\\end{bmatrix}.$$

3- Add to each row an $\epsilon$-multiple of its previous row, when possible (do it from the last row to the first, in the reversed order of phase 2).

Result: $$\begin{bmatrix}1&\epsilon&0&\cdots&0 &0&0\\\epsilon&1+\epsilon^2&\epsilon&\cdots&0 &0&0\\0&\epsilon&1+\epsilon^2&\cdots &0&0&0\\\vdots&\vdots&\vdots&\cdots&\vdots&\vdots&\vdots\\0&0&0&\cdots&\epsilon &1+\epsilon^2&\epsilon\\0&0&0&\cdots&0&\epsilon &1+\epsilon^2\\\end{bmatrix}.$$

For any $0<\epsilon<1$ this matrix is strictly diagonally dominant: each row's off-diagonal entries sum to at most $2\epsilon$, and $1+\epsilon^2>2\epsilon$ since $(1-\epsilon)^2>0$.
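A NumPy sketch of phases 2 and 3, again starting from the identity produced by phase 1, with the (arbitrary) choice $\epsilon = 0.1$:

```python
import numpy as np

n, eps = 5, 0.1
M = np.eye(n)  # output of phase 1

# Phase 2: add eps times the next row, from the top down.
for i in range(n - 1):
    M[i] += eps * M[i + 1]

# Phase 3: add eps times the previous row, from the last row to the first.
for i in range(n - 1, 0, -1):
    M[i] += eps * M[i - 1]

d = np.abs(np.diag(M))
print(np.all(d > np.abs(M).sum(axis=1) - d))  # True: strictly dominant
```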