We have to prove the following: given any solution set $L\subset{\mathbb R}^2$, any two homogeneous systems
$$\Sigma: \qquad a_i x+ b_i y=0 \qquad (1\leq i\leq n)$$
having the solution set $L$ can be transformed into each other by means of row operations.
The solution set $L$ can be one of the following:
(i) $\ \{0\}$,
(ii) a one-dimensional subspace $\langle r\rangle$ with $r=(p,q)\ne 0$,
(iii) all of ${\mathbb R}^2$.
Ad (i): If $0$ is the only solution of $\Sigma$ then not all row vectors $c_i=(a_i,b_i)$ can be multiples of one and the same vector $c\ne0$. So there are two equations $a_1 x+b_1 y=0$, $a_2 x+ b_2 y=0$ in $\Sigma$ with linearly independent row vectors $(a_i, b_i)$, and by means of row operations one can transform these into $\Sigma_0: \ x=0, y=0$. Further row operations will remove all remaining equations from $\Sigma$. We conclude that in this case all systems $\Sigma$ are equivalent to $\Sigma_0$.
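A minimal sketch of this reduction in Python, using exact rational arithmetic (the rows $(1,2)$ and $(3,-1)$ are a hypothetical choice of linearly independent rows):

```python
from fractions import Fraction

def rref_2x2(a1, b1, a2, b2):
    """Row-reduce [[a1, b1], [a2, b2]], assuming the rows are linearly independent."""
    r1 = [Fraction(a1), Fraction(b1)]
    r2 = [Fraction(a2), Fraction(b2)]
    if r1[0] == 0:                                 # ensure a pivot in column 1
        r1, r2 = r2, r1
    r1 = [v / r1[0] for v in r1]                   # scale row 1: pivot -> 1
    r2 = [v - r2[0] * w for v, w in zip(r2, r1)]   # eliminate below the pivot
    r2 = [v / r2[1] for v in r2]                   # scale row 2: pivot -> 1
    r1 = [v - r1[1] * w for v, w in zip(r1, r2)]   # eliminate above the pivot
    return [r1, r2]

# Independent rows reduce to the identity, i.e. the system Sigma_0: x = 0, y = 0.
print(rref_2x2(1, 2, 3, -1) == [[1, 0], [0, 1]])  # True
```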
Ad (ii): The system $\Sigma$ has to contain at least one equation with $c_i=(a_i,b_i)\ne 0$. We claim that each equation with $c_i\ne 0$ is individually equivalent to $\Sigma_1: \ q x -p y=0$. So in this case any given $\Sigma$ is equivalent to $\Sigma_1$. To prove the claim we may assume $a_i\ne 0$. Now $r\in L$ implies $a_i p+ b_i q=0$; if $q=0$ this would force $a_i p=0$ and hence $p=0$, contradicting $r\ne 0$, so $q\ne 0$. This implies $b_i=-a_i p/q$, so multiplying the equation $a_i x+ b_i y=0$ by $q/a_i$ gives $\Sigma_1$.
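As a sanity check, this scaling step can be sketched in Python (the particular numbers $p=2$, $q=3$, $a_i=5$ are a hypothetical choice):

```python
from fractions import Fraction

# Hypothetical line L spanned by r = (p, q) = (2, 3) (note q != 0),
# and a hypothetical leading coefficient a = a_i = 5.
p, q, a = Fraction(2), Fraction(3), Fraction(5)

b = -a * p / q                    # forced by a*p + b*q = 0
scaled = (a * q / a, b * q / a)   # multiply the row (a, b) by q/a
print(scaled == (q, -p))          # True: the row becomes (q, -p), i.e. qx - py = 0
```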
Ad (iii): This case is trivial. All rows of $\Sigma$ are $0$.
You went wrong in that you solved the system and considered that the answer to the question. Both of these systems have infinitely many solutions, but that is not what the question is asking. It is asking whether the systems are equivalent. One way to answer that is to solve them both and see that they give the same solution set. Another way is to transform one set of equations into the other using algebra.
$$\begin{array}{rl}
-x_1+x_2+4x_3 &= 0 \\
+x_1+3x_2+8x_3 &= 0 \\
\hline
4x_2+12x_3 &= 0
\end{array}$$
Divide the last equation by $4$ and you get $x_2+3x_3=0$, which can also be viewed as $x_2=-3x_3$. Substituting this into one of the equations (say the second), we get $x_1+3x_2+8x_3=0 \iff x_1+3(-3x_3)+8x_3=0 \iff x_1-x_3=0$.
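The substitution can be spot-checked numerically: the solutions of the reduced system $x_2+3x_3=0$, $x_1-x_3=0$ are the multiples of $(1,-3,1)$, and any such multiple should satisfy the original equations as well (the scalar $t=2$ below is an arbitrary choice):

```python
# Solutions of x2 + 3x3 = 0 and x1 - x3 = 0 are the multiples of (1, -3, 1).
t = 2
x1, x2, x3 = t * 1, t * -3, t * 1

print(-x1 + x2 + 4 * x3)     # 0  (first original equation)
print(x1 + 3 * x2 + 8 * x3)  # 0  (second original equation)
print(x2 + 3 * x3, x1 - x3)  # 0 0  (reduced system)
```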
Since we can transform the one system into the other, they are equivalent. This is often done using the augmented matrix version of a system and Gaussian elimination (or its variant, Gauss-Jordan elimination).
In response to the comment, label the first system's equations $a,b,c$, in order, and the second system's equations $d,e$, in order. The work above shows that $e=\frac{1}{4}a+\frac{1}{4}b$. Since we used $a$ and $b$ to get $e$, we have to pick a different pair to try and get $d$. It turns out that $d = \frac{2}{3}c-\frac{2}{3}a$ (though this is not the only possible answer).
The other direction is much simpler. Since $d$ has $x_1$ but not $x_2$, any $x_1$ terms in $a,b,c$ have to have come from $d$. The same is true for $e$ and $x_2$. Since $a$ has $-x_1$, we need a $-d$. Since $a$ has $x_2$, we need $e$. It turns out that this is enough to get the answer, as the $x_3$ terms take care of themselves. So $a=e-d$. Similarly, $b=d+3e$ and $c=\frac{1}{2}d+e$.
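These combinations are easy to verify mechanically by treating each equation as its coefficient row over $(x_1, x_2, x_3)$. A quick check in Python, where the rows for $d$ and $e$ are read off from the reduced system above ($c$'s row is not needed for the relations checked here):

```python
from fractions import Fraction as F

# Coefficient rows (x1, x2, x3) of the equations above.
a = (F(-1), F(1), F(4))   # -x1 +  x2 + 4x3 = 0
b = (F(1), F(3), F(8))    #  x1 + 3x2 + 8x3 = 0
d = (F(1), F(0), F(-1))   #  x1       -  x3 = 0
e = (F(0), F(1), F(3))    #        x2 + 3x3 = 0

def comb(*pairs):
    """Linear combination sum(coef * row) of coefficient rows."""
    return tuple(sum(c * r[i] for c, r in pairs) for i in range(3))

print(comb((F(1, 4), a), (F(1, 4), b)) == e)  # True: e = a/4 + b/4
print(comb((F(1), e), (F(-1), d)) == a)       # True: a = e - d
print(comb((F(1), d), (F(3), e)) == b)        # True: b = d + 3e
```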
Best Answer
One of the most frequent occasions where linear systems of $n$ equations in $n$ unknowns arise is in least-squares optimization problems. Let us look at an example.

Say we are studying two physical quantities $y$ and $x$, and we conjecture that $y$ is a second-order polynomial function of $x$, i.e. $y=\alpha x^2 + \beta x + \gamma$ for some unknown real numbers $\alpha$, $\beta$, $\gamma$. Suppose we perform experiments and obtain measurements $(x_1,y_1), \cdots, (x_{100},y_{100})$. Applying the polynomial model to the measurements yields $y_i=\alpha x_i^2 + \beta x_i + \gamma$ for $i=1, \cdots, 100$, or in matrix form $X k=y$, where $k=[\alpha \, \, \beta \, \, \gamma]^T$, $y=[y_1 \cdots y_{100}]^T$, and the $i^{th}$ row of $X$ is the row vector $[x_i^2 \, \, x_i \, \, 1]$.

Now, as you might observe, we have $100$ equations in $3$ unknowns, i.e. our linear system $X k=y$ is overdetermined. Practically speaking, this system is consistent (i.e. it has a solution) only if $y$ really is related to $x$ by a second-order polynomial (i.e. our conjecture is true) and, additionally, there is no noise in our measurements. So assume that at least one of these two conditions fails. Then the system $X k=y$ will not in general have a solution, and one might instead look for a vector $k$ that minimizes $||X k - y||_2^2$, i.e. the squared error. The solution of this optimization problem is the solution of the $3 \times 3$ system $X^T X k = X^T y$ (the normal equations). This formulation comes up all the time in engineering, e.g. in signal prediction. So, least-squares problems lead to square (i.e. $n \times n$) linear systems of equations.
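A sketch of this pipeline in Python with NumPy, on synthetic data (the true model $y = 2x^2 - x + 3$ and the noise level are hypothetical choices):

```python
import numpy as np

# Synthetic measurements from a hypothetical model y = 2x^2 - x + 3 plus noise.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 100)
y = 2 * x**2 - x + 3 + 0.01 * rng.standard_normal(100)

# Overdetermined system X k = y with rows [x_i^2, x_i, 1]: 100 equations, 3 unknowns.
X = np.column_stack([x**2, x, np.ones_like(x)])

# Normal equations: the square 3x3 system X^T X k = X^T y.
k = np.linalg.solve(X.T @ X, X.T @ y)
print(np.round(k, 2))  # close to [2, -1, 3]
```

In practice one would call `np.linalg.lstsq(X, y)` directly, which solves the same least-squares problem with better numerical behavior than forming $X^T X$ explicitly.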