The solution of the equation
$$
x'(t) = A x(t)
$$
is
$$
x(t) = e^{t A } x(0)
$$
whether or not $A$ is diagonalizable. This can easily be seen by differentiating the solution. (Added: I give a proof below.) Note that $e^{tA}$ and $A$ commute with each other, so there is no problem with that.
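As a numerical sanity check (a sketch assuming NumPy and SciPy; the Jordan-block matrix below is just a hypothetical example), $e^{tA}x(0)$ matches direct numerical integration of $x' = Ax$ even when $A$ is not diagonalizable:

```python
# Sketch: e^{tA} x(0) solves x' = Ax even for a non-diagonalizable A.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])     # single Jordan block: not diagonalizable
x0 = np.array([1.0, 1.0])
t = 0.5

via_expm = expm(t * A) @ x0
sol = solve_ivp(lambda s, x: A @ x, (0.0, t), x0, rtol=1e-10, atol=1e-12)
print(np.allclose(via_expm, sol.y[:, -1], atol=1e-6))   # True
```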
The Jordan canonical form is a way to compute the matrix exponential. But that is not needed here.
If there is an invertible matrix $V$ such that
$$
A = V D V^{-1}
$$
then
$$
(t A)^n = t^n V D^n V^{-1}
$$
and so
$$
\exp{(tA) } = V e^{tD} V^{-1}.
$$
Note that $D$ can be any matrix for this to hold. The point is that when $D$ is in Jordan block form, the computation of $e^{tD}$ is greatly simplified.
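A minimal numerical check of this identity (assuming NumPy and SciPy; the matrix is an arbitrary diagonalizable example):

```python
# Sketch: expm(tA) agrees with V e^{tD} V^{-1} when A = V D V^{-1}.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # eigenvalues -1 and -2: diagonalizable
t = 0.7
eigvals, V = np.linalg.eig(A)  # A = V D V^{-1} with D = diag(eigvals)
via_diag = V @ np.diag(np.exp(t * eigvals)) @ np.linalg.inv(V)
print(np.allclose(expm(t * A), via_diag))   # True
```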
Added
I will show here that the exponential solves the differential equation. In fact I will show something slightly more general, namely:
Let $B(t)$ be a differentiable matrix-valued function. Provided $B'(t)$ commutes with $B(t)$ (for all $t$ in the relevant interval), we have
$$
\frac{d}{dt} e^{B(t)} = B'(t) e^{B(t)}
$$
Proof.
\begin{align}
\frac{d}{dt} [B(t)]^n &= B'(t) [B(t)]^{n-1} + B(t) B'(t) [B(t)]^{n-2} + \ldots + [B(t)]^{n-1} B'(t) \\
&= n B'(t) [B(t)]^{n-1},
\end{align}
where the last equality stems from commutativity. Hence
\begin{align}
\frac{d}{dt} e^{B(t)} &= \frac{d}{dt}\sum_{n=0}^{\infty} \frac{[B(t)]^n}{n!} \\
&= \sum_{n=1}^{\infty} \frac{n B'(t) [B(t)]^{n-1}}{n!} \\
&= B'(t) \sum_{n=1}^{\infty} \frac{[B(t)]^{n-1}}{(n-1)!} \\
&= B'(t) e^{B(t)}.
\end{align}
Now, to prove the claim, simply take $B(t) = t A$. $\square$
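To see why the commutativity hypothesis matters, here is a small numerical sketch (assuming SciPy's `expm`; both matrices are hypothetical examples): the identity holds for $B(t) = tA$, but fails when $B'(t)$ does not commute with $B(t)$.

```python
# Sketch: d/dt e^{B(t)} = B'(t) e^{B(t)} needs [B'(t), B(t)] = 0.
import numpy as np
from scipy.linalg import expm

def d_expm(B, t, h=1e-6):
    """Centered finite-difference derivative of t -> expm(B(t))."""
    return (expm(B(t + h)) - expm(B(t - h))) / (2 * h)

A = np.array([[1.0, 2.0], [0.0, 3.0]])
P = np.array([[0.0, 1.0], [0.0, 0.0]])

# Commuting case: B(t) = tA, so B'(t) = A commutes with B(t).
B1 = lambda t: t * A
print(np.allclose(d_expm(B1, 0.5), A @ expm(0.5 * A), atol=1e-5))  # True

# Non-commuting case: B(t) = A + tP with [P, A] != 0, B'(t) = P.
B2 = lambda t: A + t * P
print(np.allclose(d_expm(B2, 0.5), P @ expm(B2(0.5)), atol=1e-5))  # False
```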
For the first reduction, we have the vector $\mathbf{u}^{(1)}$ such that:
$$
\mathbf{u^{(1)}}=\begin{bmatrix}0&\displaystyle\frac{a_{2,1}}{a_{1,1}}&\cdots&\displaystyle\frac{a_{n,1}}{a_{1,1}}\end{bmatrix}
$$
Therefore:
$$
\mathbf{M^{(1)}}=
\begin{bmatrix}
1 & 0 & 0 & \cdots & 0 & 0 \\
-a_{2,1} / a_{1,1} & 1 & 0 & \cdots & 0 & 0 \\
\vdots & 0 & \ddots & \ddots & 0 & 0 \\
-a_{j, 1} / a_{1,1} & 0 & 0 & 1 & 0 & \vdots \\
\vdots & 0 & 0 & \ddots & \ddots & 0 \\
-a_{n, 1} / a_{1,1} & 0 & 0 & \cdots & 0 & 1
\end{bmatrix}
$$
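As a sketch (assuming NumPy; the function name `gauss_transform` is mine), $M^{(1)}$ can be assembled directly from the multipliers as $I - \mathbf{u}^{(1)} e_1^{\intercal}$:

```python
# Sketch: build the first Gauss transform M^(1) = I - u^(1) e_1^T.
import numpy as np

def gauss_transform(A):
    """First Gauss transform M^(1), assuming the pivot A[0, 0] != 0."""
    n = A.shape[0]
    u = np.zeros(n)
    u[1:] = A[1:, 0] / A[0, 0]   # multipliers a_{i,1} / a_{1,1} below the pivot
    M = np.eye(n)
    M[1:, 0] = -u[1:]            # M^(1) = I - u^(1) e_1^T
    return M
```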
The key to performing the product of the two matrices $M^{(1)}$ and $A$ is to divide both matrices into blocks, because, as you will see, for the first, second, ..., last reduction, some operations are repeated at every step.
\begin{align*}
\mathbf{M^{(1)}A}&=\left[\begin{array}{c|ccccc}
1 & 0 & 0 & \cdots & 0 & 0 \\
\hline-a_{2,1} / a_{1,1} & 1 & 0 & \cdots & 0 & 0 \\
\vdots & 0 & \ddots & \ddots & 0 & 0 \\
-a_{j, 1} / a_{1,1} & 0 & 0 & 1 & 0 & \vdots \\
\vdots & 0 & 0 & \ddots & \ddots & 0 \\
-a_{n, 1} / a_{1,1} & 0 & 0 & \cdots & 0 & 1
\end{array}\right]
\left[\begin{array}{c|ccccc}
a_{1,1} & a_{1,2} & \cdots & a_{1, j} & \cdots & a_{1, n} \\[0.05cm]
\hline a_{2,1} & a_{2,2} & \cdots & a_{2, j} & \cdots & a_{2, n} \\[0.05cm]
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\[0.05cm]
a_{i, 1} & a_{i, 2} & \cdots & a_{i, j} & \cdots & a_{i, n} \\[0.05cm]
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\[0.05cm]
a_{n, 1} & a_{n, 2} & \cdots & a_{n, j} & \cdots & a_{n, n}
\end{array}\right]\\\\
&=
\left[\begin{array}{c|c}
\mathbf{A^{(1)}}(1,1) & \mathbf{A^{(1)}}(1,2:n)\\
\hline
\mathbf{A^{(1)}}(2:n,1) & \mathbf{A^{(1)}}(2:n,2:n)
\end{array}\right] \\ \\
&=\mathbf{A^{(1)}}
\end{align*}
By subdividing the matrices into blocks, one can see that to obtain the first row of $A^{(1)}$, all you have to do is multiply the first row of $M^{(1)}$ by $A$, which simply preserves the first row of $A$, i.e.,
\begin{align}
A^{(1)}(1,:)&=M^{(1)}(1,:)*A\\
&=e_{1}^{\intercal}A\\
&=A(1,:)
\end{align}
Moreover, the first column of $A^{(1)}$ should be $0$ for all entries below the pivot $A^{(1)}(1,1)$, which is what this operation is meant to achieve in the first reduction. This can be seen by noting that multiplying the rows of $M^{(1)}$ with the first column of $A$ results in cancellation of the entries. For instance:
$$
\begin{bmatrix}
-a_{2,1}/a_{1,1}&1
\end{bmatrix}
\begin{bmatrix}
a_{1,1}\\
a_{2,1}
\end{bmatrix}
=-a_{2,1}+a_{2,1}=0
$$
Therefore we may express the second block as:
\begin{align}
A^{(1)}(2:n,1)&=M^{(1)}(2:n,:)*A(:,1)\\
&=\operatorname{zeros}(n-1,1)
\end{align}
A similar observation allows us to express the third block as follows:
\begin{align}
M^{(1)}(2: n,:) * A(:, 2: n)&=M^{(1)}(2: n, 1) * A(1,2: n)+M^{(1)}(2: n, 2: n) * A(2: n, 2: n) \\
&=A(2: n, 2: n)+M^{(1)}(2: n, 1) * A(1,2: n)
\end{align}
where the last equality uses the fact that $M^{(1)}(2:n,2:n)=I_{n-1}$.
Example

Consider the following matrix:
$$
\mathbf{A}:=\left[\begin{array}{llll}1 & -1 & 2 & 1 \\ 3 & 2 & 1 & 4 \\ 5 & 8 & 6 & 3 \\ 4 & 2 & 5 & 3\end{array}\right]
$$
We have that:
$$
\mathbf{u^{(1)}}=\begin{bmatrix}0&\frac{A(2:4,1)}{A(1,1)}\end{bmatrix}=\begin{bmatrix}0&3&5&4\end{bmatrix}
$$
$$
\mathbf{A^{(1)}}=\mathbf{M^{(1)} A}=\begin{bmatrix}
1 & 0 & 0 & 0 \\
-3 & 1 & 0 & 0 \\
-5 & 0 & 1 & 0 \\
-4 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
1 & -1 & 2 & 1 \\
3 & 2 & 1 & 4 \\
5 & 8 & 6 & 3 \\
4 & 2 & 5 & 3
\end{bmatrix}
=
\begin{bmatrix}
1 & -1 & 2 & 1 \\
0 & 5 & -5 & 1 \\
0 & 13 & -4 & -2 \\
0 & 6 & -3 & -1
\end{bmatrix}
$$
As mentioned earlier, the first row is preserved, the entries of the first column below $A^{(1)}(1,1)$ are wiped out, and the remaining entries are computed through vector-vector multiplications between rows of $M^{(1)}$ and columns of $A$.
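The example is easy to reproduce numerically (a sketch assuming NumPy):

```python
# Sketch: verify the 4x4 example reduction A^(1) = M^(1) A.
import numpy as np

A = np.array([[1., -1., 2., 1.],
              [3., 2., 1., 4.],
              [5., 8., 6., 3.],
              [4., 2., 5., 3.]])
M1 = np.array([[1., 0., 0., 0.],
               [-3., 1., 0., 0.],
               [-5., 0., 1., 0.],
               [-4., 0., 0., 1.]])
print(M1 @ A)   # rows: [1,-1,2,1], [0,5,-5,1], [0,13,-4,-2], [0,6,-3,-1]
```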
Best Answer
If your matrix is $n\times n$, you have the following operations:

- $n$ divisions,
- $\frac{n^2-n}{2}$ multiplications,
- $\frac{n^2-n}{2}$ sums.

The number of divisions is clear because you have $n$ rows. The numbers of sums and of multiplications are each $\sum_{i=1}^{n-1}(n-i)=\frac{n^2-n}{2}$; they come from the term $\sum_{j=i+1}^{n}u_{ij}x_j$ in back substitution.
The total number of operations is $n+2\cdot\frac{n^2-n}{2}=n^2=\mathcal{O}(n^2)$.
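Here is a sketch of back substitution for $Ux=b$ (assuming NumPy; the explicit operation counter is mine) that confirms the $n^2$ total:

```python
# Sketch: back substitution with an operation counter; ops == n**2.
import numpy as np

def back_substitution(U, b):
    n = len(b)
    x = np.zeros(n)
    ops = 0
    for i in range(n - 1, -1, -1):
        s = b[i]
        for j in range(i + 1, n):
            s -= U[i, j] * x[j]   # one multiplication + one sum
            ops += 2
        x[i] = s / U[i, i]        # one division
        ops += 1
    return x, ops

U = np.array([[2., 1., 1.], [0., 3., 2.], [0., 0., 4.]])
x, ops = back_substitution(U, np.array([1., 2., 3.]))
print(ops, ops == 3**2)           # 9 True
```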
I hope this resolves your doubts.