As Andre says, this is not true. On the other hand, if your question is why a system (of $n$ equations in $n$ unknowns) of full rank always has a unique solution, then in my opinion the best way to see it is via linear maps. Let $n$ be fixed and choose coefficients $a_{i,j}\in \mathbb{F}$ for $(i,j) \in \{1, \ldots,n\} \times \{1, \ldots,n\}$. Define the linear map $f$ by
$$(x_1, \ldots,x_n)\longmapsto \bigg(\sum_{1\le j\le n} a_{1,j}x_j,\ \ldots,\ \sum_{1\le j\le n} a_{n,j}x_j \bigg)$$
Saying that the rank is full means that for every $(c_1, \ldots,c_n)$ there always exists an $(x_1, \ldots,x_n)$ such that $f(x_1, \ldots,x_n)=(c_1, \ldots,c_n)$, i.e.,
$$\bigg(\sum_{1\le j\le n} a_{1,j}x_j,\ \ldots,\ \sum_{1\le j\le n} a_{n,j}x_j \bigg)=(c_1, \ldots,c_n)$$
This really means that $\sum_{1\le j\le n} a_{i,j}x_j =c_i$ for each $i$, which is exactly our system.
Since $\operatorname{rank} f=n$, the dimension formula lets us conclude that the kernel is trivial, i.e., $\ker f = \{0\}$, so $f$ is an injective map. Hence a solution always exists because $f$ is surjective (the image has the same dimension as the target space), and this solution is unique since $f$ is one-to-one.
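As a small numerical sketch of this (the $3\times 3$ matrix below is a made-up full-rank example, not taken from the question), a full-rank square system has exactly one solution:

```python
import numpy as np

# A made-up 3x3 matrix of full rank: its determinant is nonzero,
# so the map x -> A @ x is both injective and surjective.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
c = np.array([1.0, 2.0, 3.0])

assert np.linalg.matrix_rank(A) == 3  # full rank

x = np.linalg.solve(A, c)     # the unique solution of A x = c
print(np.allclose(A @ x, c))  # True: x really solves the system
```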
Using the notation above, what happens when the rank is less than $n$? Then some values in the target space lie outside the image, and for those right-hand sides there is no solution: since the rank is less than $n$, the linear map cannot be surjective, so there exist vectors in $\mathbb{F}^n \setminus f(\mathbb{F}^n)$.
For the question of when there are infinitely many solutions: assume again that the rank is less than $n$, and let $(c_1, \ldots ,c_n) \in f(\mathbb{F}^n)$ (a vector in the image). Because it is in the image, there is a vector in the domain that $f$ maps to $(c_1, \ldots ,c_n)$; call it $x=(x_1, \ldots, x_n)$. But we also know that the kernel is not trivial: using the dimension formula again, $\dim(\ker f)= n-\operatorname{rank} f>0$. So there is some vector $(z_1, \ldots, z_n) \neq 0$ such that $f(z_1, \ldots, z_n)= (0, \ldots, 0)$.
Now consider $(x_1,\ldots, x_n)+k(z_1, \ldots, z_n)$, where $k\in \mathbb{F}$, and see what happens under $f$:
\begin{align}f((x_1,\ldots, x_n)+k(z_1, \ldots, z_n))&=f(x_1,\ldots, x_n)+kf(z_1, \ldots, z_n)\\
&=f(x_1,\ldots, x_n)+0= (c_1, \ldots, c_n) \end{align}
This holds because $f$ is linear and the vector $(z_1, \ldots, z_n)$ is in the kernel (it is mapped to zero by $f$). Thus $(x_1,\ldots, x_n)+k(z_1, \ldots, z_n)$ is also a solution. Indeed, the set
$$x+\ker f := \{x+z: z \in \ker f\}$$
contains, by the same argument as above, all the solutions. In that case (assuming $\mathbb{F}$ is infinite) there are infinitely many solutions.
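A quick numerical sketch of this (using a hypothetical rank-deficient matrix, chosen only for illustration): every vector $x + kz$ with $z$ in the kernel solves the same system.

```python
import numpy as np

# A hypothetical rank-deficient matrix: row 2 is twice row 1, so rank A = 1 < 2.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
c = np.array([3.0, 6.0])   # lies in the image of A
x = np.array([1.0, 1.0])   # one particular solution: A @ x = c
z = np.array([2.0, -1.0])  # a nonzero kernel vector:  A @ z = 0

assert np.allclose(A @ x, c)
assert np.allclose(A @ z, 0)

# x + k*z is a solution for every scalar k, so there are infinitely many.
for k in [-2.0, 0.0, 0.5, 10.0]:
    print(np.allclose(A @ (x + k * z), c))  # True each time
```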
Hints: The determinant is:
$$k^3-3 k+2 = (k-1)^2(k+2) = 0 \implies k = 1, -2$$
We want the determinant to be nonzero for a unique solution.
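The factorization of the cubic above can be double-checked symbolically (a quick SymPy sketch; the polynomial itself is taken from the hint):

```python
import sympy as sp

k = sp.symbols('k')
det = k**3 - 3*k + 2

print(sp.factor(det))    # the factored form (k - 1)**2 * (k + 2)
print(sp.solve(det, k))  # the roots: k = 1 and k = -2
```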
If we substitute $k = 1$ in the system, and do a RREF we have:
$$\left(
\begin{array}{cccc}
1 & 1 & 1 & 1 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
\end{array}
\right)$$
If we substitute $k = -2$ in the system, and do a RREF we have:
$$\left(
\begin{array}{cccc}
1 & 0 & -1 & 0 \\
0 & 1 & -1 & 0 \\
0 & 0 & 0 & 1 \\
\end{array}
\right)$$
Can you now draw conclusions about when the system is inconsistent, has a unique solution, or has infinitely many solutions?
Best Answer
Apply the Gaussian elimination algorithm:
$\begin{pmatrix} 1&1&-1&-1\\ 2&-4&-6&-1\\ 1&-1&k^2-1&k \end{pmatrix} \implies\begin{pmatrix} 1&1&-1&-1\\ 0&-6&-4&1\\ 0&-2&k^2&k+1 \end{pmatrix}\implies \begin{pmatrix} 1&1&-1&-1\\ 0&-6&-4&1\\ 0&0&k^2+4/3&k+2/3 \end{pmatrix}$
Now, since $k^2+\frac{4}{3}>0$ for all $k$, the last pivot never vanishes, so you can always find a solution, and it is unique.
You can get the same result if you compute the determinant of the coefficient matrix, which turns out to be nonzero for every $k$:
$$\det\begin{pmatrix} 1&1&-1\\ 2&-4&-6\\ 1&-1&k^2-1 \end{pmatrix}=[-4(k^2-1)-6]-[2(k^2-1)+6]-[-2+4]$$
$$=-4k^2+4-6-2k^2+2-6+2-4=-6k^2-8<0 \quad \forall k$$
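This determinant computation can be verified symbolically (the coefficient matrix is the one from the answer above):

```python
import sympy as sp

k = sp.symbols('k')
# Coefficient matrix of the system from the answer above.
M = sp.Matrix([[1,  1, -1],
               [2, -4, -6],
               [1, -1, k**2 - 1]])

det = sp.expand(M.det())
print(det)  # -6*k**2 - 8, which is negative for every real k
```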