As Andre says, this is not true. On the other hand, if your question is why a system of $n$ equations in $n$ unknowns with full rank always has a unique solution, then in my opinion the best way to see it is via linear maps. Let $n$ be fixed and choose a collection of scalars $a_{i,j}\in \mathbb{F}$ for $(i,j) \in \{1, \ldots,n\} \times \{1, \ldots,n\}$. Define the linear map $f$ by
$$(x_1, \ldots,x_n)\longmapsto \bigg(\sum_{1\le j\le n} a_{1,j}x_j,\ \ldots,\ \sum_{1\le j\le n} a_{n,j}x_j \bigg)$$
When we say that the rank is full, we mean that for every $(c_1, \ldots,c_n)$ there always exists an $(x_1, \ldots,x_n)$ such that $f(x_1, \ldots,x_n)=(c_1, \ldots,c_n)$, i.e.,
$$\bigg(\sum_{1\le j\le n} a_{1,j}x_j,\ \ldots,\ \sum_{1\le j\le n} a_{n,j}x_j \bigg)=(c_1, \ldots,c_n)$$
This really means that $\sum_{1\le j\le n} a_{i,j}x_j =c_i$ for each $i$, which is exactly our system.
Since $\operatorname{rank} f=n$, the dimension formula (rank–nullity) tells us that the kernel is trivial, i.e., $\ker f = \{0\}$, so $f$ is an injective map. A solution always exists because $f$ is surjective (the image has the same dimension as the target space), and this solution is unique since $f$ is one-to-one.
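As a numerical sanity check (my own addition, using NumPy and an arbitrary invertible matrix as an example), a full-rank $n \times n$ system has exactly one solution:

```python
import numpy as np

# An arbitrary 3x3 matrix of full rank (the entries are just an example).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
c = np.array([1.0, 2.0, 3.0])

assert np.linalg.matrix_rank(A) == 3  # full rank, so f is bijective

x = np.linalg.solve(A, c)             # the unique solution of A x = c
assert np.allclose(A @ x, c)
```

Here `np.linalg.solve` succeeds precisely because the matrix is invertible; for a singular matrix it raises an error, matching the failure of bijectivity described above.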
Using the notation above, what happens when the rank is less than $n$? Then there are values in the target space that lie "outside" the image, and for those there is no solution: since the rank is less than $n$, the linear map cannot be surjective, so there exist vectors in $\mathbb{F}^n \setminus f(\mathbb{F}^n)$.
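To illustrate the no-solution case (again my own numerical example, with a deliberately rank-deficient matrix), we can pick a right-hand side outside the image:

```python
import numpy as np

# A rank-deficient matrix: the third row is the sum of the first two,
# so rank A = 2 < 3 and the map is not surjective.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])
assert np.linalg.matrix_rank(A) == 2

# Every vector in the image satisfies c[2] == c[0] + c[1];
# this c violates that, so it lies outside the image.
c = np.array([1.0, 1.0, 3.0])

# Least squares finds the closest point in the image; the result does
# not hit c exactly, confirming that A x = c has no solution.
x, *_ = np.linalg.lstsq(A, c, rcond=None)
assert not np.allclose(A @ x, c)
```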
For the question: when does it have infinitely many solutions? Assume again that the rank is less than $n$. Let $(c_1, \ldots ,c_n) \in f(\mathbb{F}^n)$ (a vector in the image). Because it is in the image, there is a vector in the domain that $f$ maps to $(c_1, \ldots ,c_n)$; call it $x=(x_1, \ldots, x_n)$. But we also know that the kernel is not trivial: using the dimension formula again, $\dim(\ker f)= n-\operatorname{rank} f>0$. Then there is some vector $(z_1, \ldots, z_n) \not= 0$ such that $f(z_1, \ldots, z_n)= (0, \ldots, 0)$.
Now consider $(x_1,\ldots, x_n)+k(z_1, \ldots, z_n)$, where $k\in \mathbb{F}$, and see what happens under $f$:
\begin{align}f((x_1,\ldots, x_n)+k(z_1, \ldots, z_n))&=f(x_1,\ldots, x_n)+kf(z_1, \ldots, z_n)\\
&=f(x_1,\ldots, x_n)+0= (c_1, \ldots, c_n) \end{align}
This holds because $f$ is linear and the vector $(z_1, \ldots, z_n)$ is in the kernel (it maps to zero under $f$). Thus $(x_1,\ldots, x_n)+k(z_1, \ldots, z_n)$ is also a solution. And indeed the set
$$x+\ker f := \{x+z: z\in \ker f\}, \quad \text{where } x=(x_1,\ldots,x_n),$$
contains all the solutions, by the same argument as above. Since $\ker f$ is a nontrivial subspace, when $\mathbb{F}$ is infinite (e.g. $\mathbb{F}=\mathbb{R}$ or $\mathbb{C}$) this gives infinitely many solutions.
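The structure of the solution set can be checked numerically (my own example: a particular solution plus multiples of a kernel vector, for a rank-deficient matrix and a right-hand side inside the image):

```python
import numpy as np

# Rank-deficient example: rank A = 2, so dim ker A = 3 - 2 = 1.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])

# A right-hand side inside the image (it satisfies c[2] == c[0] + c[1]).
c = np.array([1.0, 2.0, 3.0])

x = np.array([1.0, 2.0, 0.0])   # a particular solution, found by hand
z = np.array([1.0, 1.0, -1.0])  # a nonzero kernel vector: A z = 0
assert np.allclose(A @ x, c)
assert np.allclose(A @ z, 0.0)

# Every x + k z is again a solution, for any scalar k.
for k in [-2.0, 0.5, 10.0]:
    assert np.allclose(A @ (x + k * z), c)
```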
Best Answer
A homogeneous system of 3 linear equations in 4 unknowns always has a solution; in fact, it always has a non-trivial solution, one where the unknowns are not all zero. A system is homogeneous if the constant terms are all zero, which is the situation you are describing in your question when you say "all of the three linear equations are equal to zero."
More generally: A homogeneous system of linear equations always has at least one solution, namely, the solution in which each unknown is zero. If the number of unknowns exceeds the number of equations then a homogeneous system is guaranteed to have infinitely many solutions (and thus solutions in which the unknowns are not all zero).
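A non-trivial solution of such an underdetermined homogeneous system can be computed explicitly (my own sketch, using the SVD: with a $3\times 4$ matrix the rank is at most $3 < 4$, so the kernel is at least one-dimensional and the trailing right-singular vector spans it):

```python
import numpy as np

# A homogeneous system of 3 equations in 4 unknowns: A x = 0.
# The specific entries are just an example; any 3x4 matrix works.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 1.0],
              [1.0, 0.0, 2.0, 1.0]])

# rank A <= 3 < 4, so dim ker A >= 1. The right-singular vectors
# beyond the rank (rows of Vt) span the kernel.
_, s, Vt = np.linalg.svd(A)
z = Vt[-1]                      # a kernel vector (unit length)

assert not np.allclose(z, 0.0)  # non-trivial ...
assert np.allclose(A @ z, 0.0)  # ... and it solves the homogeneous system
```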