I answered this question in another post. Here it is:
We only need to show that after eliminating $a_{2,1}$ the diagonal dominance of the second row is preserved (the same argument applies to the remaining rows and, inductively, to every later elimination step), i.e.,
$$
\left|a_{2,2}-a_{1,2}{a_{2,1}\over a_{1,1}}\right|\ge\sum_{i=3}^n\left|a_{2,i}-a_{1,i}{a_{2,1}\over a_{1,1}}\right|,
$$
which is equivalent to
$$
|a_{2,2}a_{1,1}-a_{1,2}a_{2,1}|\ge\sum_{i=3}^n|a_{2,i}a_{1,1}-a_{1,i}a_{2,1}|.
$$
But this is true, using the triangle inequality, the diagonal dominance of rows $2$ and $1$, and finally the reverse triangle inequality:
\begin{eqnarray*}
\sum_{i=3}^n|a_{2,i}a_{1,1}-a_{1,i}a_{2,1}|&\le&
|a_{1,1}|\sum_{i=3}^n|a_{2,i}|+|a_{2,1}|\sum_{i=3}^n|a_{1,i}| \\
&\le& |a_{1,1}|(|a_{2,2}|-|a_{2,1}|)+|a_{2,1}|(|a_{1,1}|-|a_{1,2}|) \\
&=&|a_{1,1}||a_{2,2}|-|a_{2,1}||a_{1,2}|\\
&\le& |a_{1,1}a_{2,2}-a_{2,1}a_{1,2}|
\end{eqnarray*}
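For readers who like to experiment, here is a quick numerical sanity check of this step (a Python/NumPy sketch of my own, not part of the argument; the size, slack, and seed are arbitrary): it generates random matrices that are diagonally dominant by rows, eliminates $a_{2,1}$, and verifies the inequality for the new second row.

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    n = 5
    A = rng.uniform(-1, 1, (n, n))
    for i in range(n):  # enforce |a_ii| >= sum_{j != i} |a_ij| for every row
        A[i, i] = np.sum(np.abs(A[i])) - np.abs(A[i, i]) + rng.uniform(0, 1)
    row2 = A[1] - (A[1, 0] / A[0, 0]) * A[0]  # eliminate a_{2,1}; row2[0] ~ 0
    assert np.abs(row2[1]) >= np.sum(np.abs(row2[2:])) - 1e-12
print("dominance of row 2 preserved in all trials")
```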
The basic idea of the proof is as follows. Assume you are eliminating the
entry $a_{ik}^{k-1}$ in the $k$th step of Gaussian elimination, so that
$a_{ik}^{k} = 0$ at the end of this step. To show that the matrix is still
diagonally dominant after the elimination, we show that the total amount added
to the off-diagonal entries of the $i$th row is less than the modulus of the eliminated entry.
We start with the strict diagonal dominance of the $k$th row, i.e.,
$$ \sum_{j \not= k} | a_{kj}^{k-1} | < | a_{kk}^{k-1} | \,. $$
Multiplying by $\frac{| a_{ik}^{k-1} |}{| a_{kk}^{k-1} |}$ (assuming $a_{ik}^{k-1} \neq 0$; otherwise there is nothing to eliminate) gives that
$$ \tag{a}
\sum_{j \not= k}
\frac{| a_{ik}^{k-1} | \cdot | a_{kj}^{k-1} |}
{| a_{kk}^{k-1} |} < | a_{ik}^{k-1} | \,. $$
Hence, indeed, what we add to the $i$th row is less than the entry that we
eliminate.
With this knowledge we can start the computation.
We have, because $a_{ik}^{k} = 0$, that
\begin{align}
\sum_{j \not= i} | a_{ij}^{k} |
&=
\sum_{\substack{j \not= i \\ j \not= k}} | a_{ij}^{k} |
\,.
\end{align}
Using the definition of the elimination step and then the triangle
inequality gives that
\begin{align}
\sum_{j \not= i} | a_{ij}^{k} |
&=
\sum_{\substack{j \not= i \\ j \not=k}}
\left| a_{ij}^{k-1} -
\frac{a_{ik}^{k-1} \cdot a_{kj}^{k-1}}{a_{kk}^{k-1}} \right| \\
&\le
\sum_{\substack{j \not= i \\ j \not=k}}
| a_{ij}^{k-1} | +
\sum_{\substack{j \not= i \\ j \not=k}}
\left| \frac{a_{ik}^{k-1} \cdot a_{kj}^{k-1}}{a_{kk}^{k-1}} \right|
\,.
\end{align}
From equation (a), after moving its $j = i$ term to the right-hand side, it
follows that
\begin{align}
\sum_{j \not= i} | a_{ij}^{k} |
&\le
\sum_{\substack{j \not= i \\ j \not=k}} | a_{ij}^{k-1} | +
| a_{ik}^{k-1} | -
\left| \frac{a_{ik}^{k-1} \cdot a_{ki}^{k-1}}{a_{kk}^{k-1}} \right|
\\ &=
\sum_{j \not= i} | a_{ij}^{k-1} | -
\left| \frac{a_{ik}^{k-1} \cdot a_{ki}^{k-1}}{a_{kk}^{k-1}} \right|
\,.
\end{align}
Using that the $i$th row was strictly diagonally dominant before the
elimination, then applying the reverse triangle inequality, and finally the
definition of the elimination step for the diagonal entry,
$a_{ii}^{k} = a_{ii}^{k-1} - \frac{a_{ik}^{k-1} a_{ki}^{k-1}}{a_{kk}^{k-1}}$,
\begin{align}
\sum_{j \not= i} | a_{ij}^{k} |
& <
| a_{ii}^{k-1} | -
\left| \frac{a_{ik}^{k-1} \cdot a_{ki}^{k-1}}{a_{kk}^{k-1}} \right|
\\ &\le
\left| a_{ii}^{k-1} -
\frac{a_{ik}^{k-1} \cdot a_{ki}^{k-1}}{a_{kk}^{k-1}} \right|
\\ &=
| a_{ii}^{k} |
\,.
\end{align}
This shows that the $i$th row is still strictly diagonally dominant after the step, which completes the proof.
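To see the induction over $k$ in action, here is a short sketch (my own code; the helper name `eliminate_and_check` and the test matrix are made up for illustration) that runs the elimination without pivoting and asserts after every step that the remaining rows are still strictly diagonally dominant:

```python
import numpy as np

def eliminate_and_check(A, tol=1e-12):
    """Gaussian elimination without pivoting on a copy of A; after each
    step, assert that every row of the active submatrix is still
    strictly diagonally dominant."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for k in range(n - 1):
        for i in range(k + 1, n):
            A[i, k:] -= (A[i, k] / A[k, k]) * A[k, k:]
            A[i, k] = 0.0  # a_{ik}^k = 0 by construction
        for i in range(k + 1, n):
            offdiag = np.sum(np.abs(A[i, k + 1:])) - np.abs(A[i, i])
            assert offdiag < np.abs(A[i, i]) + tol, "dominance lost"
    return A

A = [[4.0, 1.0, 1.0],
     [1.0, 5.0, 2.0],
     [0.0, 2.0, 6.0]]
U = eliminate_and_check(A)  # completes without triggering an assertion
```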
Best Answer
Diagonal dominance can be defined by rows or by columns. In view of what you need, i.e. to show that no partial pivoting is necessary during the Gauss elimination process, in which you eliminate column by column, the definition of diagonal dominance by columns is more helpful: if diagonal dominance by columns is preserved during the elimination, the pivot is always at least as large in modulus as every entry below it, so an exchange of rows (i.e. partial pivoting) would never help and can be omitted.
Consider a square matrix $\textbf{A}$ of type $n\times n$, diagonally dominant by columns, and let us eliminate the first column, i.e. the entries $a_{21},\dots,a_{n1}$. We then show that the submatrix $\textbf{A}'$ is diagonally dominant by columns as well:
$$ \textbf{A} ~~\rightarrow~~ \left[\begin{array}{cc} a_{11} & \textbf{a} \\ \textbf{0} & \textbf{A}' \end{array}\right], $$
$$ \left[\begin{array}{ccccc}
a_{11} & \dots & a_{1j} & \dots & a_{1n} \\
a_{21} & \dots & a_{2j} & \dots & a_{2n} \\
\vdots & & \vdots & & \vdots \\
a_{j1} & \dots & a_{jj} & \dots & a_{jn} \\
\vdots & & \vdots & & \vdots \\
a_{n1} & \dots & a_{nj} & \dots & a_{nn}
\end{array}\right]
~~\rightarrow~~
\left[\begin{array}{ccccc}
a_{11} & \dots & a_{1j} & \dots & a_{1n} \\
0 & \dots & a_{2j}-\frac{a_{21}}{a_{11}}a_{1j} & \dots & a_{2n}-\frac{a_{21}}{a_{11}}a_{1n} \\
\vdots & & \vdots & & \vdots \\
0 & \dots & a_{jj}-\frac{a_{j1}}{a_{11}}a_{1j} & \dots & a_{jn}-\frac{a_{j1}}{a_{11}}a_{1n} \\
\vdots & & \vdots & & \vdots \\
0 & \dots & a_{nj}-\frac{a_{n1}}{a_{11}}a_{1j} & \dots & a_{nn}-\frac{a_{n1}}{a_{11}}a_{1n}
\end{array}\right]. $$
First of all, diagonal dominance of the first column of $\textbf{A}$ yields
$$ |a_{11}|\ge \sum_{i=2}^n |a_{i1}|~~\implies~~1\ge \sum_{i=2}^n \left|\frac{a_{i1}}{a_{11}}\right|. $$
Then we look at the off-diagonal entries of the $j$-th column of the $\textbf{A}'$ matrix:
$$ \sum_{i=2,~i\neq j}^n \left|a_{ij}-a_{1j}\frac{a_{i1}}{a_{11}}\right| \le \sum_{i=2,~i\neq j}^n \left|a_{ij}\right|+\sum_{i=2,~i\neq j}^n \left|a_{1j}\frac{a_{i1}}{a_{11}}\right|= $$
$$ \sum_{i=1,~i\neq j}^n \left|a_{ij}\right|-|a_{1j}|+ \sum_{i=2}^n \left|a_{1j}\frac{a_{i1}}{a_{11}}\right|-\left|a_{1j}\frac{a_{j1}}{a_{11}}\right|\le_{(\text{use d.d. of the $j$-th column})} $$
$$ |a_{jj}|-|a_{1j}|+|a_{1j}|\sum_{i=2}^n \left|\frac{a_{i1}}{a_{11}}\right|-\left|a_{1j}\frac{a_{j1}}{a_{11}}\right| \le_{(\text{use d.d. of the 1-st column})} $$
$$ |a_{jj}|-\left|a_{1j}\frac{a_{j1}}{a_{11}}\right| \le_{(\text{reverse triangle inequality})} \left|a_{jj}-a_{1j}\frac{a_{j1}}{a_{11}}\right|, $$
which is exactly the modulus of the diagonal entry in the $j$-th column of the $\textbf{A}'$ matrix, i.e. the diagonal dominance of the $j$-th column is shown. So the $\textbf{A}'$ matrix is diagonally dominant by columns. Applying this procedure inductively, the submatrix in every stage of the Gauss elimination is diagonally dominant by columns. As a consequence, partial pivoting is unnecessary.
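As a numerical check of this computation (my own sketch; the slack $0.1$ and the seed are arbitrary choices), the following builds a random matrix that is diagonally dominant by columns, eliminates the first column as above, and verifies the column dominance of $\textbf{A}'$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = rng.uniform(-1, 1, (n, n))
for j in range(n):  # enforce |a_jj| >= sum_{i != j} |a_ij| (dominance by columns)
    A[j, j] = np.sum(np.abs(A[:, j])) - np.abs(A[j, j]) + 0.1

M = A[1:, [0]] / A[0, 0]        # the multipliers a_{i1} / a_{11}
Ap = A[1:, 1:] - M * A[0, 1:]   # the submatrix A' after eliminating column 1
for j in range(n - 1):
    col = np.abs(Ap[:, j])
    assert col[j] >= col.sum() - col[j] - 1e-12
print("A' is diagonally dominant by columns")
```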
You could of course ask whether full pivoting would be useful. However, first, full pivoting is out of practical use due to its computational demands, and second, it can be shown that the matrix entries grow at most by a factor of $2$ during the elimination, which makes the whole Gauss elimination procedure safe. This result can be found in J.A. Trangenstein, Scientific Computing, Vol. I: Linear and Nonlinear Equations, Texts in Computational Science and Engineering, Springer, 2018, p. 284.
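The factor-$2$ bound is also easy to observe numerically. The following sketch (mine, not taken from the cited book) tracks the largest entry appearing during the elimination of a column diagonally dominant matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
A = rng.uniform(-1, 1, (n, n))
for j in range(n):  # make A diagonally dominant by columns
    A[j, j] = np.sum(np.abs(A[:, j])) - np.abs(A[j, j]) + 0.1

U, peak = A.copy(), np.abs(A).max()
for k in range(n - 1):
    U[k + 1:, k:] -= np.outer(U[k + 1:, k] / U[k, k], U[k, k:])
    peak = max(peak, np.abs(U).max())
print(peak / np.abs(A).max() <= 2.0)  # True: growth is at most a factor of 2
```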
Let us focus now on the question of regularity (i.e. invertibility). Diagonal dominance alone is not sufficient to ensure regularity (consider the matrix $[1, 1; 1, 1]$, which is diagonally dominant but singular). However, strict diagonal dominance is sufficient, see https://en.wikipedia.org/w/index.php?title=Diagonally_dominant_matrix#Applications_and_properties.
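Both claims can be checked directly (a short NumPy illustration of the counterexample and its strict counterpart):

```python
import numpy as np

weak   = np.array([[1.0, 1.0], [1.0, 1.0]])   # dominant, but singular
strict = np.array([[2.0, 1.0], [1.0, 2.0]])   # strictly dominant
print(np.linalg.matrix_rank(weak))   # 1   -> not regular
print(np.linalg.det(strict))         # 3.0 -> regular
```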