There is one fly in the ointment, which is inconsistent systems. Two inconsistent systems have the same set of solutions, but they need not be equivalent in the sense you give. They may not even be systems in the same number of variables! But even if you require that they be systems in the same number of variables, you run into trouble. Here are two systems that have the exact same solutions (to wit, none):
$$\begin{array}{rcccl}
x & + & y & = & 0;\\
x & + & y & = & 1;
\end{array}\qquad\text{and}\qquad
\begin{array}{rcccl}
x & + & 2y & = & 0;\\
x & + & 2y & = & 1.
\end{array}$$
But $x+y=0$ cannot be obtained from the second system, since any combination of the equations in the second system will give you an equation in which the coefficient of $y$ is twice the coefficient of $x$.
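This can be checked numerically (a small NumPy sketch; the `lstsq` call simply asks whether the target row lies in the span of the given rows): the augmented row $(1,1,0)$ of $x+y=0$ is not a linear combination of the augmented rows of the second system.

```python
import numpy as np

# Augmented rows (coefficients | right-hand side) of the second system:
#   x + 2y = 0  ->  (1, 2, 0)
#   x + 2y = 1  ->  (1, 2, 1)
M = np.array([[1.0, 2.0, 0.0],
              [1.0, 2.0, 1.0]])

# Target augmented row for x + y = 0.
target = np.array([1.0, 1.0, 0.0])

# Look for coefficients c with c @ M == target, i.e. M.T @ c == target.
c, residual, rank, _ = np.linalg.lstsq(M.T, target, rcond=None)

# A nonzero residual means no linear combination of the second
# system's equations produces x + y = 0.
print(residual)  # residual is about [0.2], strictly positive
```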
But if you remove this bad case, then the result is true: two consistent systems that have the same (nonempty) set of solutions are equivalent in the sense you give.
Consider first the case of homogeneous systems (which are always consistent). We can write the system as $A\mathbf{x}=\mathbf{0}$, where $A$ is the $n\times m$ coefficient matrix of your system, with $n$ equations and $m$ unknowns.
A vector $\mathbf{x}_0$ is a solution if and only if it lies in the orthogonal complement of the subspace of $\mathbb{R}^m$ spanned by the rows of $A$ (the rows correspond to the equations). If $A\mathbf{x}=\mathbf{0}$ and $B\mathbf{x}=\mathbf{b}$ have the same solution set, then $\mathbf{b}=\mathbf{0}$ (since the solution that assigns $0$ to every variable is a solution to the first system, hence to the second).
But that means that the row space of $A$ has the same orthogonal complement as the row space of $B$. In finite-dimensional vector spaces, $\mathbf{W}^{\perp\perp}=\mathbf{W}$. Thus, the row space of $A$ and the row space of $B$ must be equal.
That means that every row of $B$ (every equation in the second system) is a linear combination of the rows of $A$ (the equations of the first system), and every row of $A$ (equations in the first system) is a linear combination of the rows of $B$ (equations of the second system). Thus, the two systems are equivalent.
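This can be illustrated with a small NumPy sketch (the matrices below are made up for the example): two homogeneous systems with the same solution set, where the rows of $B$ are built as combinations of the rows of $A$, and row-space equality is confirmed by comparing ranks of the stacked matrix.

```python
import numpy as np

# Two homogeneous systems in three unknowns with the same solution set:
# System A:  x + y     = 0,       y + z = 0
# System B:  x + y     = 0,  x + 2y + z = 0   (row1, row1 + row2 of A)
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
B = np.array([[1.0, 1.0, 0.0],
              [1.0, 2.0, 1.0]])

rank_A = np.linalg.matrix_rank(A)
rank_B = np.linalg.matrix_rank(B)
rank_stacked = np.linalg.matrix_rank(np.vstack([A, B]))

# Equal row spaces: stacking the two matrices adds nothing
# beyond the span of either one.
print(rank_A, rank_B, rank_stacked)  # 2 2 2
```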
Now, to consider the more general case of systems of the form $A\mathbf{x}=\mathbf{a}$, note that the solutions to $A\mathbf{x}=\mathbf{a}$ are of the form
$$\mathbf{s}_0 + \mathbf{n}$$
where $\mathbf{n}$ is a solution to $A\mathbf{x}=\mathbf{0}$ and $\mathbf{s}_0$ is a specific solution to $A\mathbf{x}=\mathbf{a}$. Every vector of this form is indeed a solution, since
$$A(\mathbf{s}_0+\mathbf{n}) = A\mathbf{s}_0 + A\mathbf{n} = \mathbf{a} + \mathbf{0} = \mathbf{a}.$$
And, if $\mathbf{s}_1$ is any solution, then $\mathbf{s}_1 = \mathbf{s}_0 + (\mathbf{s}_1-\mathbf{s}_0)$, and $\mathbf{n}=\mathbf{s}_1 -\mathbf{s}_0$ is a solution to $A\mathbf{x}=\mathbf{0}$:
$$A(\mathbf{s}_1-\mathbf{s}_0) = A\mathbf{s}_1 - A\mathbf{s}_0 = \mathbf{a}-\mathbf{a}=\mathbf{0}.$$
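This decomposition is easy to check numerically (a sketch with a made-up $2\times 3$ system): take a particular solution $\mathbf{s}_0$ and a null-space vector $\mathbf{n}$, and confirm that $\mathbf{s}_0+\mathbf{n}$ still solves $A\mathbf{x}=\mathbf{a}$, while the difference of two solutions solves the homogeneous system.

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
a = np.array([2.0, 1.0])

s0 = np.array([2.0, 0.0, 1.0])   # particular solution: A @ s0 == a
n = np.array([1.0, -1.0, 1.0])   # null-space vector:   A @ n  == 0

assert np.allclose(A @ s0, a)
assert np.allclose(A @ n, 0)
# The shifted vector s0 + n is again a solution of A x = a.
assert np.allclose(A @ (s0 + n), a)
# And the difference of two solutions lies in the null space.
s1 = s0 + n
assert np.allclose(A @ (s1 - s0), 0)
```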
Suppose that $A\mathbf{x}=\mathbf{a}$ has the same solution set as $B\mathbf{x}=\mathbf{b}$, and that both have at least one solution. Let $S_A$ be the solutions to $A\mathbf{x}=\mathbf{0}$ and let $S_B$ be the solutions to $B\mathbf{x}=\mathbf{0}$. I claim that $S_A = S_B$.
Indeed, let $\mathbf{s}_0$ be a particular solution to $A\mathbf{x}=\mathbf{a}$; then it is also a solution to $B\mathbf{x}=\mathbf{b}$ by assumption; so for every $\mathbf{n}\in S_B$, $\mathbf{s}_0+\mathbf{n}$ is a solution to $B\mathbf{x}=\mathbf{b}$, hence to $A\mathbf{x}=\mathbf{a}$, hence $\mathbf{n}\in S_A$. Thus, $S_B\subseteq S_A$, and a symmetric argument shows that $S_A\subseteq S_B$, so the two are equal.
But we know that if the solution set of $A\mathbf{x}=\mathbf{0}$ is the same as the solution set of $B\mathbf{x}=\mathbf{0}$, then the two homogeneous systems are equivalent. So take a row of $B$ and ignore the right-hand side: we can express it as a linear combination of the rows of $A$. Now apply that same linear combination to the full equations of $A$, right-hand sides included. If the resulting right-hand side differed from the one in $B$, then the solutions of $B\mathbf{x}=\mathbf{b}$ would satisfy two equations with identical left-hand sides but different right-hand sides; this is impossible, since we are assuming the system is consistent. Thus the linear combination of equations of $A$ that yields the left-hand side of an equation of $B$ also yields the same right-hand side, so every equation in the second system is a linear combination of the equations in the first system. The symmetric argument gives the converse, so the two systems are equivalent.
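To see this concretely (a NumPy sketch with a made-up pair of consistent systems sharing the same solution set), one can solve for the combination coefficients using only the left-hand sides, and then confirm that the right-hand sides come along for free:

```python
import numpy as np

# Augmented matrices (coefficients | right-hand side) of two consistent
# systems with the same solution set.
A_aug = np.array([[1.0, 1.0, 0.0, 2.0],
                  [0.0, 1.0, 1.0, 1.0]])
B_aug = np.array([[1.0, 1.0, 0.0, 2.0],
                  [1.0, 2.0, 1.0, 3.0]])   # row1, row1 + row2 of A_aug

for row in B_aug:
    # Coefficients expressing this row of B from the rows of A,
    # found from the coefficient part alone (left-hand sides).
    c, *_ = np.linalg.lstsq(A_aug[:, :3].T, row[:3], rcond=None)
    # The same combination reproduces the full augmented row,
    # right-hand side included.
    assert np.allclose(c @ A_aug, row)
```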
Let's work out an easier case to get started. Suppose $ax+by=0$ and $cx+dy=0$. Think of these in calculus III style: the equations say $\langle a,b \rangle \cdot \langle x,y \rangle =0$ and $\langle c,d \rangle \cdot \langle x,y \rangle =0$. This means $\langle a,b \rangle$ and $\langle c,d \rangle$ are both perpendicular to $\langle x,y \rangle$. Therefore, provided $\langle x,y\rangle \neq \langle 0,0\rangle$, it is geometrically clear that there exists $k \in \mathbb{R}$ such that $\langle a,b \rangle =k\langle c,d \rangle$, and thus $ax+by = k(cx+dy)$ (which is what you want to show for two equations).
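A tiny numeric check of this picture (the vectors below are assumed example values): two vectors perpendicular to the same nonzero $\langle x,y\rangle$ are parallel, which in 2D means the determinant $ad-bc$ vanishes, so one is a scalar multiple of the other.

```python
import numpy as np

xy = np.array([1.0, 2.0])    # a nonzero direction <x, y>
ab = np.array([2.0, -1.0])   # perpendicular to xy
cd = np.array([4.0, -2.0])   # also perpendicular to xy

assert np.isclose(ab @ xy, 0.0)
assert np.isclose(cd @ xy, 0.0)
# Parallel in 2D: the determinant a*d - b*c is zero, so ab = k * cd.
assert np.isclose(ab[0] * cd[1] - ab[1] * cd[0], 0.0)
k = ab[0] / cd[0]
assert np.allclose(ab, k * cd)   # here k = 0.5
```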
Generally, I do agree that time is better spent on structure here. Much more can be gained by using linear independence and matrix structure. That is, after all, the difference between now and 200 years ago: we have the benefit of not just thinking in equations.
Best Answer
I presume your second system was meant to be $$3x_1 + x_2 = 0\\ x_1 + x_2 = 0\ ,$$ and you want to know a systematic way of determining the coefficients $\frac{1}{3} \mbox{ and }\frac{4}{3}$ in your equation $$3x_1 + x_2 = \frac{1}{3}(x_1-x_2) + \frac{4}{3}(2x_1 + x_2)\ ,$$ and corresponding coefficients in a similar expression for your fourth equation in terms of the first two. But it's not at all clear to me why you would want to do this, and if that isn't your question, then you'll need to state it more clearly.
To obtain the coefficients in your expression, simply treat them as unknowns $\alpha_1$ and $\alpha_2$, for which you can obtain two equations from the expression they have to satisfy. You want your third equation to be expressed as $\alpha_1$ times the first plus $\alpha_2$ times the second: $$3x_1 + x_2 = \alpha_1 (x_1-x_2) + \alpha_2(2x_1 + x_2)\ .$$ Now rearrange the right-hand side so that $x_1$ and $x_2$ each appear with a single coefficient: $$3x_1 + x_2 = (\alpha_1 + 2\alpha_2)\, x_1 + (-\alpha_1 + \alpha_2)\, x_2\ .$$ For this equation to be satisfied, you want the coefficients of $x_1$ and $x_2$ on the left to have the same values as they do on the right. That is: \begin{eqnarray} 3 &=& \alpha_1 + 2\alpha_2 \ \mbox{, and}\\ 1 &=& -\alpha_1 +\alpha_2\ , \end{eqnarray} which you can now solve by the method of (Gaussian) elimination.
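Carrying this out numerically (a sketch using NumPy's linear solver), the $2\times 2$ system above indeed yields the coefficients $\frac{1}{3}$ and $\frac{4}{3}$ quoted at the start:

```python
import numpy as np

# Matching coefficients of x1 and x2:
#    alpha1 + 2*alpha2 = 3
#   -alpha1 +   alpha2 = 1
M = np.array([[1.0, 2.0],
              [-1.0, 1.0]])
rhs = np.array([3.0, 1.0])

alpha = np.linalg.solve(M, rhs)
print(alpha)  # [0.33333333 1.33333333], i.e. alpha1 = 1/3, alpha2 = 4/3
```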