We have to prove the following: given any solution set $L\subset{\mathbb R}^2$, any two homogeneous systems
$$\Sigma: \qquad a_i x+ b_i y=0 \qquad (1\leq i\leq n)$$
having the solution set $L$ can be transformed into each other by means of row operations.
The solution set $L$ can be one of the following:
(i) $\ \{0\}$,
(ii) a one-dimensional subspace $\langle r\rangle$ with $r=(p,q)\ne 0$,
(iii) all of ${\mathbb R}^2$.
Ad (i): If $0$ is the only solution of $\Sigma$ then not all row vectors $c_i=(a_i,b_i)$ can be multiples of one and the same vector $c\ne0$. So there are two equations $a_1 x+b_1 y=0$, $a_2 x+ b_2 y=0$ in $\Sigma$ with linearly independent row vectors $(a_i, b_i)$, and by means of row operations one can transform these into $\Sigma_0: \ x=0, y=0$. Further row operations will remove all remaining equations from $\Sigma$. We conclude that in this case all systems $\Sigma$ are equivalent to $\Sigma_0$.
Ad (ii): The system $\Sigma$ has to contain at least one equation with $c_i=(a_i,b_i)\ne 0$. We claim that all equations with $c_i\ne 0$ are individually equivalent to $\Sigma_1: \ q x -p y=0$. So in this case any given $\Sigma$ is equivalent to $\Sigma_1$. To prove the claim we may assume $a_i\ne 0$ (the case $b_i\ne 0$ is symmetric). Now $r\in L$ implies $a_i p+ b_i q=0$, and we must have $q\ne 0$: otherwise $a_i p=0$ would force $p=0$, so $r=0$. This implies $b_i=-a_i p/q$, so multiplying the equation $a_i x+ b_i y=0$ by $q/a_i$ gives $\Sigma_1$.
Ad (iii): This case is trivial. All rows of $\Sigma$ are $0$.
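As a quick numeric sanity check of case (ii), not part of the argument itself, here is a small Python sketch with a made-up line $\langle r\rangle$, $r=(p,q)=(2,3)$. It confirms that every nonzero equation with that solution line is a scalar multiple of $\Sigma_1:\ qx-py=0$.

```python
from fractions import Fraction

# A made-up instance of case (ii): L is the line spanned by r = (p, q) = (2, 3).
p, q = Fraction(2), Fraction(3)

# An equation a*x + b*y = 0 has solution set <r> exactly when its row
# vector (a, b) is orthogonal to r, i.e. a*p + b*q = 0.
for a in [Fraction(3), Fraction(-6), Fraction(1, 2)]:
    b = -a * p / q                 # forced by a*p + b*q = 0 (q != 0, see above)
    assert a * p + b * q == 0      # r really solves the equation
    # Scaling the equation by q/a turns it into Sigma_1: q*x - p*y = 0.
    scale = q / a
    assert (scale * a, scale * b) == (q, -p)
```

Exact `Fraction` arithmetic is used so the scaling identity holds with equality rather than up to floating-point error.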
You went wrong in that you solved the system and considered that the answer to the question. Both of these systems have infinitely many solutions, but that is not what the question is asking. It is asking whether the systems are equivalent. One way to answer that is to solve them both and see that they give the same solution set. Another way is to transform one set of equations into the other using algebra.
$$\begin{align}
-x_1+x_2+4x_3 &= 0 \\
x_1+3x_2+8x_3 &= 0 \\ \hline
4x_2+12x_3 &= 0
\end{align}$$
Divide the last equation by $4$ and you get $x_2+3x_3=0$, which can also be viewed as $x_2=-3x_3$. Substituting this into one of the equations (say the second), we get $x_1+3x_2+8x_3=0 \iff x_1+3(-3x_3)+8x_3=0 \iff x_1-x_3=0$.
Since we can transform the one system into the other, they are equivalent. This is often done using the augmented matrix version of a system and Gaussian elimination (or its Gauss-Jordan variant).
In response to the comment, label the first system's equations $a,b,c$, in order, and the second system's equations $d,e$, in order. The work above shows that $e=\frac{1}{4}a+\frac{1}{4}b$. Since we used $a$ and $b$ to get $e$, we have to pick a different pair to try and get $d$. It turns out that $d = \frac{2}{3}c-\frac{2}{3}a$ (though this is not the only possible answer).
The other direction is much simpler. Since $d$ has $x_1$ but not $x_2$, any $x_1$ terms in $a,b,c$ have to have come from $d$. The same is true for $e$ and $x_2$. Since $a$ has $-x_1$, we need a $-d$. Since $a$ has $x_2$, we need $e$. It turns out that this is enough to get the answer as the $x_3$ takes care of itself. So $a=e-d$. Similarly, $b=d+3e$ and $c=\frac{1}{2}d+e$.
There is one fly in the ointment, which is inconsistent systems. Two inconsistent systems have the same set of solutions, but they need not be equivalent in the sense you give. They may not even be systems in the same number of variables! But even if you require that they be systems in the same number of variables, you run into trouble. Here are two systems that have the exact same solutions (to wit, none): $$\begin{array}{rcccl} x & + & y & = & 0;\\ x & + & y & = & 1; \end{array}\qquad\text{and}\qquad \begin{array}{rcccl} x & + & 2y & = & 0;\\ x & + & 2y & = & 1. \end{array}$$ But $x+y=0$ cannot be obtained from the second system, since any combination of the equations in the second system will give you an equation in which the coefficient of $y$ is twice the coefficient of $x$.
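The claim that $x+y=0$ lies outside the span of the second system's equations can be checked with a rank computation on the augmented rows: a row lies in the span of the others exactly when appending it does not increase the rank.

```python
import numpy as np

# Augmented rows (coeff of x, coeff of y, right-hand side) of the second system.
rows = np.array([[1.0, 2.0, 0.0],    # x + 2y = 0
                 [1.0, 2.0, 1.0]])   # x + 2y = 1
target = np.array([1.0, 1.0, 0.0])   # x +  y = 0

rank_before = np.linalg.matrix_rank(rows)
rank_after = np.linalg.matrix_rank(np.vstack([rows, target]))
# The target is a combination of the rows iff appending it keeps the rank;
# here the rank jumps, so x + y = 0 is NOT obtainable from the second system.
assert rank_after > rank_before
```

Any combination $\alpha(1,2,0)+\beta(1,2,1)$ has $y$-coefficient twice its $x$-coefficient, exactly as the text argues, so the rank must increase.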
But if you remove this bad case, then the result is true: two consistent systems that have the same (nonempty) set of solutions are equivalent in the sense you give.
Consider first the case of homogeneous systems (which are always consistent). We can write the system as $A\mathbf{x}=\mathbf{0}$, where $A$ is the $n\times m$ coefficient matrix of your system, with $n$ equations and $m$ unknowns.
A vector $\mathbf{x}_0$ is a solution if and only if it lies in the orthogonal complement of the subspace of $\mathbb{R}^m$ spanned by the rows of $A$ (which corresponds to the equations). If $A\mathbf{x}=\mathbf{0}$ and $B\mathbf{x}=\mathbf{b}$ have the same solution set, then that means that $\mathbf{b}=\mathbf{0}$ (since the solution that assigns every variable to $0$ is a solution to the first system, hence to the second).
But that means that the row space of $A$ has the same orthogonal complement as the row space of $B$. In finite dimensional vector spaces, $(\mathbf{W})^{\perp\perp}=\mathbf{W}$. Thus, the row space of $A$ and the row space of $B$ have to be equal.
That means that every row of $B$ (every equation in the second system) is a linear combination of the rows of $A$ (the equations of the first system), and every row of $A$ (equations in the first system) is a linear combination of the rows of $B$ (equations of the second system). Thus, the two systems are equivalent.
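The row-space criterion lends itself to a simple rank test: two matrices have equal row spaces exactly when stacking them together does not exceed either rank. Here is a small sketch with made-up matrices (not from the question) illustrating both a positive and a negative case.

```python
import numpy as np

def same_row_space(A, B):
    """Row spaces of A and B are equal iff rank(A) = rank(B) = rank of the
    stacked matrix, i.e. neither contributes rows outside the other's span."""
    rA, rB = np.linalg.matrix_rank(A), np.linalg.matrix_rank(B)
    r_both = np.linalg.matrix_rank(np.vstack([A, B]))
    return rA == rB == r_both

# Made-up example: B's rows are combinations of A's rows and vice versa.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])
B = np.array([[1.0, 3.0, 4.0],    #   row1 + row2 of A
              [2.0, 3.0, 5.0]])   # 2*row1 - row2 of A
assert same_row_space(A, B)       # equivalent homogeneous systems

C = np.array([[1.0, 0.0, 0.0]])   # a row outside the span of A's rows
assert not same_row_space(A, C)
```

By the argument above, `same_row_space(A, B)` being true is exactly the statement that the homogeneous systems $A\mathbf{x}=\mathbf{0}$ and $B\mathbf{x}=\mathbf{0}$ are equivalent.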
Now, to consider the more general case of systems of the form $A\mathbf{x}=\mathbf{a}$, note that the solutions to $A\mathbf{x}=\mathbf{a}$ are of the form $$\mathbf{s}_0 + \mathbf{n}$$ where $\mathbf{n}$ is a solution to $A\mathbf{x}=\mathbf{0}$ and $\mathbf{s}_0$ is a specific solution to $A\mathbf{x}=\mathbf{a}$. This follows from the fact that all such vectors are solutions, since $$A(\mathbf{s}_0+\mathbf{n}) = A\mathbf{s}_0 + A\mathbf{n} = \mathbf{a} + \mathbf{0} = \mathbf{a}.$$ And, if $\mathbf{s}_1$ is any solution, then $\mathbf{s}_1 = \mathbf{s}_0 + (\mathbf{s}_1-\mathbf{s}_0)$, and $\mathbf{n}=\mathbf{s}_1 -\mathbf{s}_0$ is a solution to $A\mathbf{x}=\mathbf{0}$: $$A(\mathbf{s}_1-\mathbf{s}_0) = A\mathbf{s}_1 - A\mathbf{s}_0 = \mathbf{a}-\mathbf{a}=\mathbf{0}.$$
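The "particular solution plus null-space vector" structure is easy to see on a concrete instance. The system below is made up for illustration; the assertions trace exactly the two computations in the display above.

```python
import numpy as np

# Made-up consistent system A x = a with a known particular solution s0.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
a = np.array([3.0, 2.0])
s0 = np.array([1.0, 2.0, 0.0])
assert np.allclose(A @ s0, a)          # s0 is a particular solution

# A null-space vector: A n = 0.
n = np.array([1.0, -1.0, 1.0])
assert np.allclose(A @ n, 0)

# Every s0 + t*n solves A x = a, mirroring A(s0 + n) = a + 0 = a.
for t in [-2.0, 0.5, 7.0]:
    assert np.allclose(A @ (s0 + t * n), a)

# Conversely, the difference of two solutions lies in the null space.
s1 = s0 + 0.5 * n
assert np.allclose(A @ (s1 - s0), 0)
```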
Suppose that $A\mathbf{x}=\mathbf{a}$ has the same solution set as $B\mathbf{x}=\mathbf{b}$, and that both have at least one solution. Let $S_A$ be the solutions to $A\mathbf{x}=\mathbf{0}$ and let $S_B$ be the solutions to $B\mathbf{x}=\mathbf{0}$. I claim that $S_A = S_B$.
Indeed, let $\mathbf{s}_0$ be a particular solution to $A\mathbf{x}=\mathbf{a}$; then it is also a solution to $B\mathbf{x}=\mathbf{b}$ by assumption; so for every $\mathbf{n}\in S_B$, $\mathbf{s}_0+\mathbf{n}$ is a solution to $B\mathbf{x}=\mathbf{b}$, hence to $A\mathbf{x}=\mathbf{a}$, hence $\mathbf{n}\in S_A$. Thus, $S_B\subseteq S_A$, and a symmetric argument shows that $S_A\subseteq S_B$, so the two are equal.
But we know that if the solution set to $A\mathbf{x}=\mathbf{0}$ is the same as the solution set to $B\mathbf{x}=\mathbf{0}$, then the two systems are equivalent. If we take a row of $B$ and ignore the right hand side, then we can express it as a linear combination of the rows of $A$. If, taking into account the right hand side, we were to get an equation different from the equation we have in $B$, then this would tell us that the solutions to $B\mathbf{x}=\mathbf{b}$ satisfy two equations with identical left hand sides but different right hand sides; this is impossible, since we are assuming the system is consistent. Thus, the linear combination of equations of $A$ that yields the equation of $B$ will also give the same right hand side; so every equation in the second system is a linear combination of the equations in the first system; the converse argument also holds, so the two systems are equivalent.
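For consistent inhomogeneous systems, this whole argument amounts to the two *augmented* matrices having equal row spaces, which again reduces to a rank check. The pair of systems below is made up; both have the unique solution $(1,1)$.

```python
import numpy as np

# Two made-up consistent systems with the same unique solution (1, 1):
#   x + y = 2,  x - y = 0      and      x = 1,  y = 1
augA = np.array([[1.0,  1.0, 2.0],
                 [1.0, -1.0, 0.0]])
augB = np.array([[1.0, 0.0, 1.0],
                 [0.0, 1.0, 1.0]])

# Equivalence: stacking either augmented matrix onto the other leaves the
# rank unchanged, so every row of each is a combination of the other's rows
# (e.g. x = 1 is (1/2)(x + y = 2) + (1/2)(x - y = 0)).
r = np.linalg.matrix_rank
assert r(augA) == r(augB) == r(np.vstack([augA, augB])) == 2
```

As the answer notes, consistency is essential here: for inconsistent systems equal (empty) solution sets do not force equal augmented row spaces.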