When the matrix is in echelon form, for each non-zero row $R_i$, you can divide the row by its leading non-zero value; this makes the leading value $1$. Next, for each row $R_n$ above $R_i$, you can subtract from $R_n$ the row $R_i$ multiplied by the entry of $R_n$ in the same column as that leading $1$. This leaves $R_n$ with a $0$ in that column. If you follow this procedure starting with the first row and going down, by the time you are done the matrix will be in reduced echelon form, and guess what! Those leading $1$s that define the pivot positions are exactly the locations of the leading non-zero values in each row before it was reduced.
I.e., when a matrix is in echelon form, the pivot positions are exactly the leading non-zero values in each row. Quite frankly, if I had written the definition, that's how I would have defined it, since the two are equivalent, and you meet the leading non-zero values before you ever get to reduced echelon form.
For example, in your matrix, I marked the leading non-zero entries in red:
$$\begin{bmatrix}
\color{red}1 &4 &5 &-9 &7 \\
0 &\color{red}2 &4 &-6 &-6 \\
0 &0 &0 &\color{red}{-5} &0 \\
0 &0 &0 &0 &0 \\
\end{bmatrix}$$
First, divide each row by its leading non-zero value:
$$\begin{bmatrix}
\color{red}1 &4 &5 &-9 &7 \\
0 &\color{red}1 &2 &-3 &-3 \\
0 &0 &0 &\color{red}1 &0 \\
0 &0 &0 &0 &0 \\
\end{bmatrix}$$
Subtract $4$ times Row 2 from Row 1:
$$\begin{bmatrix}
\color{red}1 &0 &-3 &3 &19 \\
0 &\color{red}1 &2 &-3 &-3 \\
0 &0 &0 &\color{red}1 &0 \\
0 &0 &0 &0 &0 \\
\end{bmatrix}$$
Subtract $3$ times row 3 from row 1, and add $3$ times row 3 to row 2:
$$\begin{bmatrix}
\color{red}1 &0 &-3 &0 &19 \\
0 &\color{red}1 &2 &0 &-3 \\
0 &0 &0 &\color{red}1 &0 \\
0 &0 &0 &0 &0 \\
\end{bmatrix}$$
And now, we are in reduced echelon form. See that the pivot positions from the definition are in the same locations as the leading non-zero values of the echelon form.
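The whole procedure above can be sketched in a few lines of Python. This is a minimal sketch (the function name is my own), and it assumes its input is already in echelon form; `Fraction` keeps the arithmetic exact:

```python
from fractions import Fraction

def to_reduced_echelon(rows):
    """Back-substitution sketch: assumes `rows` is already in echelon form."""
    rows = [[Fraction(x) for x in r] for r in rows]
    for i, row in enumerate(rows):
        # Find the leading non-zero entry of this row, if any.
        pivot_col = next((j for j, x in enumerate(row) if x != 0), None)
        if pivot_col is None:
            continue  # zero row: nothing to do
        # Scale the row so its leading entry becomes 1.
        lead = rows[i][pivot_col]
        rows[i] = [x / lead for x in rows[i]]
        # Clear the entries above the new leading 1.
        for n in range(i):
            factor = rows[n][pivot_col]
            rows[n] = [a - factor * b for a, b in zip(rows[n], rows[i])]
    return rows

A = [[1, 4, 5, -9, 7],
     [0, 2, 4, -6, -6],
     [0, 0, 0, -5, 0],
     [0, 0, 0, 0, 0]]
R = to_reduced_echelon(A)
# R reproduces the reduced echelon form computed step by step above.
```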
The row echelon form is useful in many respects. One of them is that the operations leading to it don't change linear relations between the columns of the matrix.
More precisely, if $A$ and $B$ are row equivalent matrices (one can be obtained by the other via elementary row operations) and their columns are $a_1,\dots,a_n$ and $b_1,\dots,b_n$ respectively, then a relation $\alpha_1a_{i_1}+\dots+\alpha_ka_{i_k}=0$ holds if and only if $\alpha_1b_{i_1}+\dots+\alpha_kb_{i_k}=0$ holds.
In other words, a set of columns of $A$ is linearly dependent if and only if the corresponding set of columns of $B$ is linearly dependent.
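This invariance is easy to check numerically. A small sketch (helper name is my own), using the matrix from the example above, where the relation $-3a_1+2a_2-a_3=0$ holds:

```python
def col_relation_holds(matrix, coeffs):
    """Check whether sum_j coeffs[j] * (column j) is the zero vector."""
    return all(sum(c * row[j] for j, c in enumerate(coeffs)) == 0
               for row in matrix)

A = [[1, 4, 5, -9, 7],
     [0, 2, 4, -6, -6],
     [0, 0, 0, -5, 0],
     [0, 0, 0, 0, 0]]

coeffs = [-3, 2, -1, 0, 0]          # -3*a1 + 2*a2 - a3 = 0
assert col_relation_holds(A, coeffs)

# Apply an elementary row operation: add 5 * (row 1) to (row 2).
B = [row[:] for row in A]
B[1] = [b + 5 * a for a, b in zip(B[0], B[1])]
assert col_relation_holds(B, coeffs)  # the same relation still holds
```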
Consider now a row echelon form $B$ of the matrix $A$. The pivot columns in $B$ are obviously linearly independent and each nonpivot column is a linear combination of the pivot columns (using only the pivot columns to the left of it). Therefore the same is true for the corresponding columns of $A$.
Hence one can find a basis of the column space of $A$ by considering the columns corresponding to the pivot columns in a row echelon form.
If $B$ is the reduced row echelon form, then the entries in a nonpivot column provide the coefficients for expressing that column as a linear combination of the pivot columns, and the same coefficients express the corresponding column of $A$ as a linear combination of the basis columns chosen as above.
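One can see this on the example worked earlier. A sketch (helper name is my own), where `A` is the original matrix and `R` its reduced row echelon form: the entries of each nonpivot column of `R` are exactly the coefficients expressing the corresponding column of `A` in terms of the pivot columns of `A`:

```python
A = [[1, 4, 5, -9, 7],
     [0, 2, 4, -6, -6],
     [0, 0, 0, -5, 0],
     [0, 0, 0, 0, 0]]
R = [[1, 0, -3, 0, 19],
     [0, 1, 2, 0, -3],
     [0, 0, 0, 1, 0],
     [0, 0, 0, 0, 0]]

def column(M, j):
    return [row[j] for row in M]

pivot_cols = [0, 1, 3]        # pivot columns (0-indexed)
for j in [2, 4]:              # nonpivot columns
    coeffs = column(R, j)     # entry i gives the coefficient of the i-th pivot column
    combo = [sum(c * column(A, p)[k] for c, p in zip(coeffs, pivot_cols))
             for k in range(len(A))]
    assert combo == column(A, j)  # the combination reproduces the column of A
```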
By the way, this proves that the reduced row echelon form is unique.
Of course, the reduced row echelon form is particularly useful for solving linear systems (which is the reason why it was invented).
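For instance, reading the example matrix above as the coefficient matrix of a homogeneous system, the reduced form gives $x_1 = 3x_3 - 19x_5$, $x_2 = -2x_3 + 3x_5$, $x_4 = 0$, with $x_3$ and $x_5$ free. A quick sketch verifying the resulting null-space basis against the original matrix:

```python
A = [[1, 4, 5, -9, 7],
     [0, 2, 4, -6, -6],
     [0, 0, 0, -5, 0],
     [0, 0, 0, 0, 0]]

# Basis vectors read off from the reduced row echelon form:
basis = [
    [3, -2, 1, 0, 0],    # free choice x3 = 1, x5 = 0
    [-19, 3, 0, 0, 1],   # free choice x3 = 0, x5 = 1
]
for v in basis:
    # Each basis vector satisfies A v = 0 for the *original* matrix.
    assert all(sum(a * x for a, x in zip(row, v)) == 0 for row in A)
```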
The above is a matrix in row echelon form. However, you have the added restriction that all entries are $1$ or $0$. Also, you have $4$ columns, $3$ rows, and each row has a leading entry.
The fact that there's a leading entry in each row means that the matrix must contain the following columns, in this order from left to right:
$$\begin{bmatrix} 1 & \ast & \ast \\ 0 & 1 & \ast \\ 0& 0& 1 \\ \end{bmatrix}$$
However, you're not done yet. You are missing a column, whose form depends on where it is inserted. There are three cases.
Can you continue?