Gaussian elimination puts a matrix in row echelon form, while Gauss-Jordan elimination puts a matrix in reduced row echelon form. For small systems (or by hand), it is usually more convenient to use Gauss-Jordan elimination, since you can then read off the value of each variable directly from the final matrix. However, plain Gaussian elimination (followed by back substitution) is often computationally cheaper, which is why computer implementations tend to prefer it. Gaussian elimination is also all you need to determine the rank of a matrix (an important property of any matrix); going through the extra work of putting a matrix in reduced row echelon form is not worth it if you only want the rank.
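For instance, the rank can be computed directly (a quick NumPy sketch, not part of the original answer; the matrix used here is the coefficient matrix from the worked example further down, whose third row is the first row minus the second):

```python
import numpy as np

# Coefficient matrix of the example system below; row 3 = row 1 - row 2,
# so only two rows are linearly independent.
A = np.array([[2.0,  1.0,  1.0],
              [1.0, -1.0,  2.0],
              [1.0,  2.0, -1.0]])

rank = np.linalg.matrix_rank(A)
print(rank)  # 2
```

The rank equals the number of nonzero rows left after elimination, so reaching row echelon form is already enough.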
EDIT:
Here are some abbreviations to start off with:
REF = "Row Echelon Form". RREF = "Reduced Row Echelon Form."
In your question, you say you reduce a matrix $A$ to a diagonal matrix in which every nonzero entry equals 1. For this to happen, you must perform row operations to "pivot" on each entry along the diagonal. These row operations are multiplying a row by a nonzero scalar, and adding or subtracting a scalar multiple of one row from another row. My interpretation of REF is doing row operations in such a way that you avoid dividing rows by their pivot values (to make the pivot become 1). If you then go through each pivot (the numbers along the diagonal) and divide its row by the leading coefficient, you will end up in RREF. See these Khan Academy videos for worked examples.
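The REF/RREF distinction can be seen concretely with SymPy (a sketch, not part of the original answer; note that REF is not unique, so `echelon_form` may return a different-looking REF than a hand computation would):

```python
from sympy import Matrix

# Coefficient matrix from the worked example below
A = Matrix([[2,  1,  1],
            [1, -1,  2],
            [1,  2, -1]])

ref = A.echelon_form()   # REF: pivots need not be 1, entries above pivots remain
rref, pivots = A.rref()  # RREF: each pivot is 1, with zeros above and below it
print(ref)
print(rref, pivots)
```

Here `rref` returns both the reduced matrix and the tuple of pivot columns.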
In a system $Ax=B$, $x$ has a unique solution exactly when $A$ is invertible. Invertible matrices have several important properties. The most useful property for your question is that their RREF is the identity matrix (a matrix with only 1's down the diagonal and 0's everywhere else). If you row-reduce a matrix and it does not become the identity matrix in RREF, then that matrix was non-invertible. Non-invertible matrices (also known as singular matrices) are not as helpful when trying to solve a system exactly.
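A quick sanity check of this fact (a SymPy sketch; the two small matrices are just illustrative examples):

```python
from sympy import Matrix, eye

M = Matrix([[2, 1],
            [1, 1]])   # det = 1, so M is invertible
S = Matrix([[1, 2],
            [2, 4]])   # rows are proportional, so S is singular

M_rref = M.rref()[0]
S_rref = S.rref()[0]
print(M_rref == eye(2))  # True: invertible matrix reduces to the identity
print(S_rref == eye(2))  # False: singular matrix does not
```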
We are given the system above and form our augmented matrix: (note: the same argument works even if we don't rewrite this as a matrix. Just follow the described row operations as operations on the equations in the system instead. Using matrices just simplifies notation a bit.)
$$\left[\begin{array}{ccc|c}2&1&1&5\\1&-1&2&1\\1&2&-1&4\end{array}\right]$$
We apply row-reduction:
First, we want to make it so that all entries in the first column are zero except for the first row, first column entry. $R_1-2R_2\mapsto R_2,~~~~ R_1-2R_3\mapsto R_3$
$$\left[\begin{array}{ccc|c}2&1&1&5\\0&3&-3&3\\0&-3&3&-3\end{array}\right]$$
Now, we notice that the second and third rows are multiples of each other, so we can clear one out. $R_3+R_2\mapsto R_3$
$$\left[\begin{array}{ccc|c}2&1&1&5\\0&3&-3&3\\0&0&0&0\end{array}\right]$$
Let us make the pivot point of the second row a one now. $\frac{1}{3}R_2\mapsto R_2$
$$\left[\begin{array}{ccc|c}2&1&1&5\\0&1&-1&1\\0&0&0&0\end{array}\right]$$
Let us clear the rest of the column for the second pivot. $R_1-R_2\mapsto R_1$
$$\left[\begin{array}{ccc|c}2&0&2&4\\0&1&-1&1\\0&0&0&0\end{array}\right]$$
And now finally let us make the first pivot a one. $\frac{1}{2}R_1\mapsto R_1$
$$\left[\begin{array}{ccc|c}1&0&1&2\\0&1&-1&1\\0&0&0&0\end{array}\right]$$
This final form of our matrix is what we call the Reduced Row Echelon Form.
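The whole reduction above can be checked mechanically, e.g. with SymPy (a sketch, not part of the original answer):

```python
from sympy import Matrix

# Augmented matrix of the system above
aug = Matrix([[2,  1,  1, 5],
              [1, -1,  2, 1],
              [1,  2, -1, 4]])

R, pivots = aug.rref()
print(R)       # matches the final matrix above
print(pivots)  # pivots in columns 0 and 1; column 2 (z) has no pivot
```

The missing pivot in the $z$ column is exactly what makes $z$ a free variable.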
We can now reinterpret this as the system of equations:
$\begin{cases}x+z=2\\y-z=1\\z~\text{is free}\end{cases}$
So, if supposing that $z$ is some parameter $t$, you have:
$\begin{cases} x=2-t\\y=1+t\\z=t\end{cases}$
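To double-check the parametric solution, we can substitute a few values of $t$ back into the original system (a small NumPy sketch, not part of the original answer):

```python
import numpy as np

A = np.array([[2.,  1.,  1.],
              [1., -1.,  2.],
              [1.,  2., -1.]])
b = np.array([5., 1., 4.])

for t in (0.0, 1.0, -3.5):
    x = np.array([2 - t, 1 + t, t])  # the parametric solution above
    assert np.allclose(A @ x, b)     # every t satisfies all three equations
print("all checks passed")
```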
As for the question of when you know if there is exactly one solution or not, we can describe a system of linear equations as a matrix equation: $A\overrightarrow{x} = \overrightarrow{b}$, where $A$ and $\overrightarrow{b}$ are known and we want to find $\overrightarrow{x}$. A unique solution will exist if and only if $A$ is invertible, and can be found as $\overrightarrow{x}=A^{-1}\overrightarrow{b}$.
If $A$ is not invertible, then either there are infinitely many solutions or there are no solutions. Which it is depends on the specific choice of $A$ and $\overrightarrow{b}$. Checking to see if a matrix $A$ is invertible can be done by finding its determinant. A square matrix $A$ is invertible if and only if $\det(A)\neq 0$. (A non-square matrix is never invertible)
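As an illustration (a NumPy sketch; the $2\times 2$ matrix is just an arbitrary invertible example): the coefficient matrix of the system above has determinant 0, so it cannot have a unique solution, while any square matrix with nonzero determinant can be solved directly.

```python
import numpy as np

A = np.array([[2.,  1.,  1.],
              [1., -1.,  2.],
              [1.,  2., -1.]])
print(np.linalg.det(A))   # 0 (up to rounding): A is singular, no unique solution

B = np.array([[1., 2.],
              [3., 4.]])  # det(B) = -2 != 0, so B is invertible
x = np.linalg.solve(B, np.array([5., 6.]))
print(x)                  # the unique solution of Bx = (5, 6)
```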
Best Answer
Almost. Maybe the best way to see this is as follows: if $C$ is an invertible matrix, then the set $S_1$ of vectors $x$ such that $Ax=b$ is the same as the set $S_2$ of vectors $x$ such that $CAx=Cb$. This follows because for each $x$ with $Ax=b$ we immediately find $C\cdot Ax=Cb$, and conversely, for each $x$ with $CAx=Cb$, multiplying by $C^{-1}$ (which exists by the assumption of invertibility) gives $Ax=C^{-1}CAx=C^{-1}Cb=b$. Therefore we have both $S_1\subseteq S_2$ and $S_2\subseteq S_1$.
Now to apply this to Gaussian elimination, observe that each single step (scaling a row, adding a multiple of one row to another, swapping rows) can be achieved by multiplying on the left by a suitable simple matrix $C$ with an easily found inverse.
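For example, the first row operation of the worked example above, $R_1-2R_2\mapsto R_2$, is left-multiplication by such an elementary matrix (a NumPy sketch; $E$ below is just the identity with its second row replaced):

```python
import numpy as np

A = np.array([[2.,  1.,  1.],
              [1., -1.,  2.],
              [1.,  2., -1.]])

# Elementary matrix for "replace R2 by R1 - 2*R2"
E = np.eye(3)
E[1] = [1., -2., 0.]

print(E @ A)               # second row becomes [0, 3, -3], as in the example
E_inv = np.linalg.inv(E)   # E is invertible, so the solution set is preserved
assert np.allclose(E_inv @ (E @ A), A)
```

Because each such $E$ has an inverse, the argument above shows that every elimination step leaves the solution set unchanged.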