Your last matrix is the result of the Gauss–Jordan algorithm.
It contains all coefficients of the linear equation system $Ax=b$.
Since it is not clear whether the entry in the last row is zero, you cannot divide by it to obtain a leading 1.
But you are right: normally there is a leading 1 in every row.
In many engineering applications, when you solve $Ax = b$, the matrix $A \in \mathbb{R}^{N \times N}$ remains unchanged, while the right-hand side vector $b$ keeps changing.
A typical example is solving a partial differential equation for different forcing functions. For these different forcing functions, the mesh is usually kept the same. The matrix $A$ depends only on the mesh parameters and hence remains unchanged, while the right-hand side vector $b$ changes for each forcing function.
Another example is solving a time-dependent problem, where the unknowns evolve with time. In this case again, if the time step is constant across time instants, the matrix $A$ remains unchanged and only the right-hand side vector $b$ changes at each time step.
The key idea behind solving with the $LU$ factorization (or, for that matter, any factorization) is to decouple the factorization phase (usually computationally expensive) from the actual solve phase. The factorization phase needs only the matrix $A$, while the solve phase uses the factored form of $A$ together with the right-hand side to solve the linear system. Hence, once we have the factorization, we can reuse the factored form of $A$ to solve for different right-hand sides at a relatively moderate computational cost.
The cost of factorizing the matrix $A$ into $LU$ is $\mathcal{O}(N^3)$. Once you have this factorization, the cost of solving $LUx = b$ is just $\mathcal{O}(N^2)$, since solving a triangular system scales as $\mathcal{O}(N^2)$.
(Note that to solve $LUx = b$, you first solve $Ly = b$ and then $Ux = y$. Solving $Ly = b$ and $Ux=y$ each costs $\mathcal{O}(N^2)$.)
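The two-step triangular solve above can be sketched as follows, assuming SciPy is available; the matrix and vector values are made up for illustration (note that SciPy's `lu` returns a factorization $A = PLU$ with a permutation $P$ from partial pivoting, so the forward substitution uses $P^T b$):

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

A = np.array([[4.0, 3.0, 2.0],
              [2.0, 4.0, 1.0],
              [1.0, 2.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])

P, L, U = lu(A)                               # A = P @ L @ U
y = solve_triangular(L, P.T @ b, lower=True)  # forward substitution: Ly = P^T b
x = solve_triangular(U, y, lower=False)       # back substitution:    Ux = y

assert np.allclose(A @ x, b)
```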
Hence, if you have $r$ right-hand side vectors $\{b_1,b_2,\ldots,b_r\}$, once you have the $LU$ factorization of the matrix $A$, the total cost to solve $$Ax_1 = b_1, Ax_2 = b_2 , \ldots, Ax_r = b_r$$ scales as $\mathcal{O}(N^3 + rN^2)$.
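This cost split can be sketched in code (assuming SciPy; the matrix and right-hand sides are random placeholders): pay the $\mathcal{O}(N^3)$ factorization once with `lu_factor`, then solve each of the $r$ right-hand sides at $\mathcal{O}(N^2)$ cost with `lu_solve`:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
N, r = 100, 5
A = rng.standard_normal((N, N))              # placeholder system matrix
bs = [rng.standard_normal(N) for _ in range(r)]

lu_piv = lu_factor(A)                        # O(N^3), done once
xs = [lu_solve(lu_piv, b) for b in bs]       # O(N^2) per right-hand side

for x, b in zip(xs, bs):
    assert np.allclose(A @ x, b)
```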
On the other hand, if you do Gauss elimination separately for each right-hand side vector $b_j$, then the total cost scales as $\mathcal{O}(rN^3)$, since each Gauss elimination independently costs $\mathcal{O}(N^3)$.
However, when people say Gauss elimination, they usually mean $LU$ decomposition, not solving each right-hand side completely independently.
Best Answer
First notice that any composition of elementary row operations is invertible: each elementary row operation is invertible, and a product of invertible matrices is invertible.
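For instance, the elementary row operation that adds twice the first row to the second is undone by subtracting it again:
$$E = \begin{pmatrix}1 & 0\\ 2 & 1\end{pmatrix}, \qquad E^{-1} = \begin{pmatrix}1 & 0\\ -2 & 1\end{pmatrix}, \qquad EE^{-1} = I.$$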
Proof:
Let $C$ be the invertible matrix representing the row operations, let $K$ be the solution set of $Ax=b$, and $K'$ that of $CAx=Cb$.
$\Longrightarrow:$ Given $k\in K$, we have $Ak = b$; multiplying by $C$ on the left on both sides gives $CAk=Cb$, so $k\in K'$. Hence $K\subseteq K'$.
$\Longleftarrow:$ Given $k'\in K'$, we have $CAk'=Cb$; since $C$ is invertible, $(C^{-1}C)Ak'=(C^{-1}C)b\implies Ak'=b,$ so $k'\in K$. Hence $K'\subseteq K$,
which completes the proof.