As Andre says, this is not true. If, on the other hand, your question is why a system of $n$ equations in $n$ unknowns with full rank always has a unique solution, then in my opinion the best way to see it is through linear maps. Fix $n$ and choose a collection of scalars $a_{i,j}\in \mathbb{F}$ for $(i,j) \in \{1, \ldots,n\} \times \{1, \ldots,n\}$. Define the linear map $f$ by
$$(x_1, \ldots,x_n)\longmapsto \bigg(\sum_{1\le j\le n} a_{1,j}x_j, \ldots,\sum_{1\le j\le n} a_{n,j}x_j \bigg)$$
Saying that the rank is full means that for every $(c_1, \ldots,c_n)$ there always exists an $(x_1, \ldots,x_n)$ such that $f(x_1, \ldots,x_n)=(c_1, \ldots,c_n)$, i.e.,
$$\bigg(\sum_{1\le j\le n} a_{1,j}x_j, \ldots,\sum_{1\le j\le n} a_{n,j}x_j \bigg)=(c_1, \ldots,c_n)$$
This really means that $\sum_{1\le j\le n} a_{i,j}x_j =c_i$ for each $i$, which is exactly our system.
Since $\operatorname{rank} f=n$, the dimension formula tells us that the kernel is trivial, i.e., $\ker f = \{0\}$, so $f$ is an injective map. A solution always exists because $f$ is surjective (the image has the same dimension as the target space), and this solution is unique since $f$ is one-to-one.
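As a quick numerical sanity check, here is a sketch in NumPy with a hypothetical full-rank $3\times 3$ matrix $A$: full rank means `solve` succeeds for any right-hand side, and the solution it returns is the unique one.

```python
import numpy as np

# Hypothetical full-rank 3x3 system A x = c (any invertible A would do).
A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])
c = np.array([1., 2., 3.])

# Full rank: rank f = n = 3, so f is bijective.
assert np.linalg.matrix_rank(A) == 3

x = np.linalg.solve(A, c)      # the unique solution
assert np.allclose(A @ x, c)   # f(x) = c, i.e. x really solves the system
```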
Using the notation above, what happens when the rank is less than $n$? Then there are values in the target space which lie "outside" the image, and for those there is no solution: since the rank is less than $n$, the linear map cannot be surjective, so there exist vectors in $\mathbb{F}^n \backslash f(\mathbb{F}^n)$.
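To see such an unreachable target concretely, here is a sketch with a hypothetical rank-deficient matrix whose second row is twice the first, so the image is just the line spanned by $(1,2)$:

```python
import numpy as np

# Hypothetical rank-deficient map: rank f = 1 < n = 2,
# and the image is the line spanned by (1, 2).
A = np.array([[1., 1.],
              [2., 2.]])
assert np.linalg.matrix_rank(A) == 1

# (1, 0) is not a multiple of (1, 2), so it lies outside the image:
# even the least-squares "best effort" cannot hit it exactly.
c = np.array([1., 0.])
x, *_ = np.linalg.lstsq(A, c, rcond=None)
assert not np.allclose(A @ x, c)  # no exact solution exists
```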
For the question of when the system has infinitely many solutions, assume again that the rank is less than $n$. Let $(c_1, \ldots ,c_n) \in f(\mathbb{F}^n)$ be a vector in the image; because it is in the image, there is a vector in the domain that $f$ maps to $(c_1, \ldots ,c_n)$, call it $x=(x_1, \ldots, x_n)$. We also know that the kernel is not trivial: using the dimension formula again, $\dim(\ker f)= n-\operatorname{rank} f>0$. So there is some vector $(z_1, \ldots, z_n) \neq 0$ such that $f(z_1, \ldots, z_n)= (0, \ldots, 0)$.
Now consider $(x_1,\ldots, x_n)+k(z_1, \ldots, z_n)$, where $k\in \mathbb{F}$, and see what happens under $f$:
\begin{align}f((x_1,\ldots, x_n)+k(z_1, \ldots, z_n))&=f(x_1,\ldots, x_n)+kf(z_1, \ldots, z_n)\\
&=f(x_1,\ldots, x_n)+0= (c_1, \ldots, c_n) \end{align}
This works because $f$ is linear and the vector $(z_1, \ldots, z_n)$ is in the kernel (it maps to zero under $f$). Thus $(x_1,\ldots, x_n)+k(z_1, \ldots, z_n)$ is also a solution. And indeed the set
$$x+\ker f := \{x+z: z\in \ker f\}$$
contains, by the same argument as above, all the solutions. In that case there are infinitely many solutions (assuming the field $\mathbb{F}$ is infinite).
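The whole argument can be replayed numerically. In this sketch (same hypothetical rank-1 matrix as before, now with a right-hand side chosen inside the image), $x$ is one particular solution, $z$ is a nonzero kernel vector, and every $x + kz$ solves the system:

```python
import numpy as np

# Rank-deficient square system with a consistent right-hand side.
A = np.array([[1., 1.],
              [2., 2.]])
c = np.array([3., 6.])           # c = f(3, 0), so c lies in the image

x = np.array([3., 0.])           # one particular solution
z = np.array([1., -1.])          # nonzero vector with A @ z = 0
assert np.allclose(A @ z, 0)     # z is in the kernel

# Every x + k*z is again a solution, for any scalar k.
for k in [-2.0, 0.0, 0.5, 10.0]:
    assert np.allclose(A @ (x + k * z), c)
```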
Best Answer
For the system $Ax=b$, we can have either a unique solution, no solution, or infinitely many solutions. Since we already know two distinct solutions, the system must have infinitely many solutions $(D)$. This means the number of linearly independent rows (equations) is less than the number of unknowns.
If $A$ is a square matrix, this translates to the case $\text{rank}(A)=\text{rank}([A|b])<n \iff$ there is at least one zero row in the row echelon form of $A$ $(B)$ $\iff \det A=0$ $(C)$ $\iff A$ is not invertible $(E)$.
Since the rest of the statements are true, $(A)$ must be false.
We know that $(A)$ is false because the rest are true, not because we understand why $(A)$ is false itself. Note that $A$ is invertible $\iff\text{rank}(A)=n\iff$ row echelon form of $A$ has no zero row.
Intuitively, to get the reduced row echelon form of $A$, we divide each row by its leading coefficient and make the rest of the entries in each column equal to $0$ through row transformations. This means the reduced row echelon form of $A$ has $n$ linearly independent rows with leading $1$s and rest of the entries $0$. Can you now see why the reduced row echelon form of any invertible matrix is the identity matrix and vice-versa?
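The RREF claim is easy to check symbolically. This sketch uses SymPy's exact-arithmetic `rref` on a hypothetical invertible $3\times 3$ matrix: since it is invertible, every column gets a pivot and the reduced form is the identity.

```python
from sympy import Matrix, eye

# Hypothetical invertible 3x3 matrix; its RREF should be the identity.
A = Matrix([[2, 1, 0],
            [1, 3, 1],
            [0, 1, 2]])
assert A.det() != 0              # invertible

rref, pivots = A.rref()          # exact reduced row echelon form
assert rref == eye(3)            # RREF of an invertible matrix is I
assert pivots == (0, 1, 2)       # a leading 1 in every column
```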