As Andre says, this is not true. If, on the other hand, your question is why a system (of $n$ equations in $n$ unknowns) with full rank always has a unique solution, then in my opinion the best way to see it is via linear maps. Let $n$ be fixed and choose a collection of $a_{i,j}\in \mathbb{F}$ for $(i,j) \in \{1, \ldots, n\} \times \{1, \ldots, n\}$. Define the linear map $f$ by
$$(x_1, \ldots, x_n)\longmapsto \bigg(\sum_{1\le j\le n} a_{1,j}x_j, \ldots, \sum_{1\le j\le n} a_{n,j}x_j \bigg)$$
Saying that the rank is full means that for every $(c_1, \ldots, c_n)$ there always exists an $(x_1, \ldots, x_n)$ such that $f(x_1, \ldots, x_n)=(c_1, \ldots, c_n)$, i.e.,
$$\bigg(\sum_{1\le j\le n} a_{1,j}x_j, \ldots, \sum_{1\le j\le n} a_{n,j}x_j \bigg)=(c_1, \ldots, c_n)$$
This really means that $\sum_{1\le j\le n} a_{i,j}x_j = c_i$ for each $i$, which is exactly our system.
Since $\operatorname{rank} f = n$, the dimension formula (rank–nullity) tells us that the kernel is trivial, i.e., $\ker f = \{0\}$, so $f$ is an injective map. Hence a solution always exists because $f$ is surjective (the image has the same dimension as the target space), and this solution is unique since $f$ is one-to-one.
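As a quick numerical sanity check of the full-rank case (a sketch using NumPy; the particular matrix below is made up for illustration):

```python
import numpy as np

# A hypothetical 3x3 matrix of full rank, so the map f is bijective.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])
assert np.linalg.matrix_rank(A) == 3

c = np.array([1.0, 2.0, 3.0])   # any target vector (c_1, ..., c_n)
x = np.linalg.solve(A, c)       # the unique preimage under f

# f(x) = c holds, and the solution is unique because ker f = {0}.
assert np.allclose(A @ x, c)
```

Any other choice of $c$ works equally well, precisely because $f$ is surjective.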
Using the notation above, what happens when the rank is less than $n$? Then some values in the target space lie "outside" the image, and for those there is no solution: since the rank is less than $n$, the linear map cannot be surjective, so there exist vectors in $\mathbb{F}^n \backslash f(\mathbb{F}^n)$.
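To see a concrete inconsistent system (again a sketch with a made-up rank-deficient matrix), we can check that a target vector outside the image admits no exact solution; the least-squares residual being nonzero certifies this:

```python
import numpy as np

# A hypothetical rank-deficient matrix: row 2 = 2 * row 1, so rank = 1 < 2.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
assert np.linalg.matrix_rank(A) == 1

# (1, 0) is not a multiple of (1, 2), so it lies outside the image of f.
c = np.array([1.0, 0.0])

# Least squares finds the closest point of the image to c; since the
# distance is nonzero, A x = c has no exact solution.
x, *_ = np.linalg.lstsq(A, c, rcond=None)
resid = np.linalg.norm(A @ x - c)
assert resid > 1e-9
```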
Now for the question: when does the system have infinitely many solutions? Assume again that the rank is less than $n$. Let $(c_1, \ldots, c_n) \in f(\mathbb{F}^n)$ be a vector in the image; since it is in the image, there is a vector in the domain that $f$ maps to $(c_1, \ldots, c_n)$, call it $x=(x_1, \ldots, x_n)$. We also know that the kernel is not trivial: using the dimension formula again, $\dim(\ker f)= n-\operatorname{rank} f>0$. So there is some vector $(z_1, \ldots, z_n) \neq 0$ such that $f(z_1, \ldots, z_n)= (0, \ldots, 0)$.
Now consider $(x_1,\ldots, x_n)+k(z_1, \ldots, z_n)$, where $k\in \mathbb{F}$, and see what happens under $f$:
\begin{align}f((x_1,\ldots, x_n)+k(z_1, \ldots, z_n))&=f(x_1,\ldots, x_n)+kf(z_1, \ldots, z_n)\\
&=f(x_1,\ldots, x_n)+0= (c_1, \ldots, c_n) \end{align}
This works because $f$ is linear and the vector $(z_1, \ldots, z_n)$ is in the kernel (it maps to zero under $f$). Thus $(x_1,\ldots, x_n)+k(z_1, \ldots, z_n)$ is also a solution. And indeed the set
$$x+\ker f := \{x+z : z\in \ker f\}$$
contains, by the same argument as above, all the solutions. So when $\mathbb{F}$ is infinite, there are infinitely many solutions.
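The argument above can be sketched numerically (same made-up rank-1 matrix as before, whose kernel is spanned by $z = (2, -1)$):

```python
import numpy as np

# A hypothetical rank-1 matrix; its kernel is spanned by z = (2, -1).
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
z = np.array([2.0, -1.0])
assert np.allclose(A @ z, 0)          # z is in the kernel

x = np.array([1.0, 0.0])              # a particular solution
c = A @ x                             # so c is in the image by construction

# Every x + k z is again a solution of A v = c, for any scalar k:
for k in [-3.0, 0.5, 7.0]:
    assert np.allclose(A @ (x + k * z), c)
```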
When the matrix is in echelon form, for each non-zero row $R_i$ you can divide the row by its leading non-zero value; that makes the leading value $1$. Next, for each row $R_k$ above $R_i$, you can subtract from $R_k$ the row $R_i$ multiplied by the entry of $R_k$ in the same column as that leading $1$. This leaves $R_k$ with a $0$ in that column. If you follow this procedure starting with the first row and going down, by the time you are done the matrix will be in reduced echelon form, and guess what! Those leading $1$s that define the pivot positions are exactly the locations of the leading non-zero values in each row before it was reduced.
That is, when a matrix is in echelon form, the pivot positions are exactly the positions of the leading non-zero values in each row. Quite frankly, if I had written the definition, that's how I would have defined it, since the two are equivalent, and you meet echelon form before you get to reduced echelon form.
For example, in your matrix, I marked the leading non-zero entries in red:
$$\begin{bmatrix}
\color{red}1 &4 &5 &-9 &7 \\
0 &\color{red}2 &4 &-6 &-6 \\
0 &0 &0 &\color{red}{-5} &0 \\
0 &0 &0 &0 &0 \\
\end{bmatrix}$$
First, divide each row by its leading non-zero value:
$$\begin{bmatrix}
\color{red}1 &4 &5 &-9 &7 \\
0 &\color{red}1 &2 &-3 &-3 \\
0 &0 &0 &\color{red}1 &0 \\
0 &0 &0 &0 &0 \\
\end{bmatrix}$$
Subtract $4$ times Row 2 from Row 1:
$$\begin{bmatrix}
\color{red}1 &0 &-3 &3 &19 \\
0 &\color{red}1 &2 &-3 &-3 \\
0 &0 &0 &\color{red}1 &0 \\
0 &0 &0 &0 &0 \\
\end{bmatrix}$$
Subtract $3$ times row 3 from row 1, and add 3 times row 3 to row 2:
$$\begin{bmatrix}
\color{red}1 &0 &-3 &0 &19 \\
0 &\color{red}1 &2 &0 &-3 \\
0 &0 &0 &\color{red}1 &0 \\
0 &0 &0 &0 &0 \\
\end{bmatrix}$$
And now we are in reduced echelon form. Notice that the pivot positions from the definition are in the same locations as the leading non-zero values of the echelon form.
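The steps above can be run mechanically; here is a short sketch of the procedure using exact rational arithmetic on the example matrix:

```python
from fractions import Fraction

# The echelon-form matrix from the example, with exact arithmetic.
M = [[Fraction(v) for v in row] for row in
     [[1, 4, 5, -9,  7],
      [0, 2, 4, -6, -6],
      [0, 0, 0, -5,  0],
      [0, 0, 0,  0,  0]]]

# For each non-zero row: scale its leading entry to 1, then clear the
# entries above that leading 1 (the procedure described in the text).
for i, row in enumerate(M):
    lead = next((j for j, v in enumerate(row) if v != 0), None)
    if lead is None:
        continue                           # skip all-zero rows
    M[i] = [v / row[lead] for v in row]    # make leading value 1
    for k in range(i):                     # rows above row i
        factor = M[k][lead]
        M[k] = [a - factor * b for a, b in zip(M[k], M[i])]

# M is now the reduced echelon form shown in the final matrix above.
assert M[0] == [1, 0, -3, 0, 19]
```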
Suppose you have a matrix like $$\begin{pmatrix} x & 1 \\ 1 & x \end{pmatrix}$$ If $x \neq 0$ then your pivot is in the first row, but if $x = 0$ then the pivot is instead in the second row.
Also notice that if $x = 1$ then the matrix has rank $1$, and if $x = 0$ then it has rank $2$. Based on this observation we expect something to prevent us from row reducing to the identity matrix, because if $x = 1$ that would be impossible.
I disagree a little with how the textbook is worded. You can still do some row reduction; you just can't divide by $0$, or by any quantity (like $x$) that could be $0$.
So here what I would do is subtract $x$ times the second row from the first row to get $$\begin{pmatrix} 0 & 1 - x^2 \\ 1 & x \end{pmatrix}.$$ Then swap the rows to get $$\begin{pmatrix} 1 & x \\ 0 & 1 - x^2 \end{pmatrix}.$$ Now we can see that if $1 - x^2 = 0$ then the matrix has rank $1$ and otherwise the matrix has rank $2$.
Moreover, we know the determinant is $-(1 - x^2)$, since we did one row operation that did not affect the determinant and a second that multiplied it by $-1$.
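Both the rank claim and the determinant bookkeeping can be checked numerically for a few sample values of $x$ (a sketch using NumPy):

```python
import numpy as np

# Check the rank and determinant claims for several sample values of x.
for x in [0.0, 1.0, -1.0, 2.0]:
    A = np.array([[x, 1.0],
                  [1.0, x]])
    # Rank is 1 exactly when 1 - x^2 = 0, i.e. x = +/- 1; otherwise 2.
    expected_rank = 1 if abs(1 - x**2) < 1e-12 else 2
    assert np.linalg.matrix_rank(A) == expected_rank
    # det = -(1 - x^2) = x^2 - 1, matching the row-operation bookkeeping.
    assert np.isclose(np.linalg.det(A), x**2 - 1)
```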