The proof in the PDF (Theorem 1.1) is very elementary. The crux of the argument is that if $M$ is strictly diagonally dominant and singular, then there exists a vector $u \neq 0$ with
$$Mu = 0.$$
Since $u \neq 0$, it has an entry $u_i \neq 0$ of largest magnitude, i.e. $|u_i| \geq |u_j|$ for all $j$. Then
\begin{align*}
\sum_j m_{ij} u_j &= 0\\
m_{ii} u_i &= -\sum_{j\neq i} m_{ij}u_j\\
m_{ii} &= -\sum_{j\neq i} \frac{u_j}{u_i}m_{ij}\\
|m_{ii}| &\leq \sum_{j\neq i} \left|\frac{u_j}{u_i}\right||m_{ij}|\\
|m_{ii}| &\leq \sum_{j\neq i} |m_{ij}|,
\end{align*}
where the last step uses $|u_j| \leq |u_i|$. This contradicts strict diagonal dominance.
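If you want to see the conclusion numerically, here is a minimal sketch; the matrix `M` and the helper `is_strictly_diagonally_dominant` are my own illustrative choices, not from your PDF. It checks the row condition $|m_{ii}| > \sum_{j\neq i}|m_{ij}|$ and confirms the determinant is nonzero, so $Mu = 0$ forces $u = 0$.

```python
import numpy as np

def is_strictly_diagonally_dominant(M):
    """Check |m_ii| > sum over j != i of |m_ij| for every row i."""
    A = np.abs(np.asarray(M, dtype=float))
    diag = np.diag(A)
    off_diag = A.sum(axis=1) - diag
    return bool(np.all(diag > off_diag))

# An arbitrary strictly diagonally dominant example:
M = np.array([[ 4.0, 1.0, -2.0],
              [-1.0, 6.0,  2.0],
              [ 0.5, 1.0,  3.0]])
print(is_strictly_diagonally_dominant(M))  # True
print(np.linalg.det(M))                    # 76.0 (nonzero), so Mu = 0 only for u = 0
```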
I'm skeptical you will find a significantly more elementary proof. Incidentally, though, the Gershgorin circle theorem (also described in your PDF) is very beautiful and gives geometric intuition for why no eigenvalue can be zero.
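For concreteness, here is a small sketch of that geometric intuition, reusing the same example matrix `M` from above (again my own example): each Gershgorin disc is centred at $m_{ii}$ with radius $r_i = \sum_{j\neq i}|m_{ij}|$, and strict diagonal dominance says $|m_{ii}| > r_i$, so $0$ lies outside every disc and cannot be an eigenvalue.

```python
import numpy as np

M = np.array([[ 4.0, 1.0, -2.0],
              [-1.0, 6.0,  2.0],
              [ 0.5, 1.0,  3.0]])
centers = np.diag(M)
radii = np.abs(M).sum(axis=1) - np.abs(centers)

# Strict dominance: 0 is outside every Gershgorin disc, so 0 is not an eigenvalue.
print(all(abs(c) > r for c, r in zip(centers, radii)))  # True
print(np.linalg.eigvals(M))                             # none of these is 0
```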
You are using the definition correctly, but your absolute values for the first row are slightly off.
Assuming these are real-valued matrices, for row 1 we need
$|\alpha | + |2| < |3|$
$ \implies |\alpha| + 2< 3 $
$ \implies |\alpha| < 1 $
$\implies -1 < \alpha < 1$.
For row 2, you are correct:
$|-2| + |2| < |\beta | \implies
4 < | \beta | \implies
\beta < -4 \text{ or } 4 < \beta$.
To be clear, the matrix $A(\alpha, \beta )$ is strictly diagonally dominant if and only if both of these conditions are met: $\alpha \in (-1, 1)$ and $\beta \in (-\infty, -4) \cup (4, \infty)$.
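If you want to sanity-check the combined condition numerically, here is a small sketch; the helper `strictly_dominant` is hypothetical and simply encodes the two row inequalities derived above.

```python
def strictly_dominant(alpha, beta):
    """Encode the two row conditions: |alpha| + |2| < |3| and |-2| + |2| < |beta|."""
    return (abs(alpha) + 2 < 3) and (2 + 2 < abs(beta))

print(strictly_dominant(0.5,  5))   # True:  alpha in (-1, 1) and |beta| > 4
print(strictly_dominant(1.0,  5))   # False: |alpha| = 1 fails the strict inequality
print(strictly_dominant(0.5, -4))   # False: |beta| = 4 fails the strict inequality
```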
Best Answer
As LutzL stated, this is false in general. Another (even simpler) example is the zero matrix. But for some kinds of (non-strictly) diagonally dominant matrices you can still guarantee nonsingularity.
Take $A\in\mathbb C^{n\times n}$ with $n\ge2$ and $$\forall\, i \neq j :\quad\left|a_{i,i}\right|\cdot\left|a_{j,j}\right| \gt r_i(A)\cdot r_j(A)$$ (where $a_{k,k}$ is the $k$-th diagonal entry and $r_k(A) = \sum_{l\neq k} |a_{k,l}|$ is the associated off-diagonal row sum);
then $A$ is nonsingular. The proof is similar to the proof of Gershgorin's theorem.
Note that all strictly diagonally dominant matrices fulfill this condition, but so do matrices with non-strict dominance in exactly one row.
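Here is a minimal sketch of this pairwise test; the example matrix is my own and is only non-strictly dominant in its first row, yet it passes the condition and is indeed nonsingular.

```python
import numpy as np
from itertools import combinations

def pairwise_condition(A):
    """Check |a_ii| * |a_jj| > r_i(A) * r_j(A) for all pairs i != j,
    where r_k(A) sums the off-diagonal absolute values of row k."""
    B = np.abs(np.asarray(A))
    d = np.diag(B)
    r = B.sum(axis=1) - d
    return all(d[i] * d[j] > r[i] * r[j]
               for i, j in combinations(range(len(d)), 2))

# Non-strictly dominant in row 0 only (|2| = |1| + |1|):
A = np.array([[2.0, 1.0, 1.0],
              [0.0, 3.0, 1.0],
              [1.0, 1.0, 4.0]])
print(pairwise_condition(A))  # True
print(np.linalg.det(A))       # 20.0 (nonzero): A is nonsingular
```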