We have to prove the following: given any solution set $L\subset{\mathbb R}^2$, any two homogeneous systems
$$\Sigma: \qquad a_i x+ b_i y=0 \qquad (1\leq i\leq n)$$
having the solution set $L$ can be transformed into each other by means of row operations.
The solution set $L$ can be one of the following:
(i) $\ \{0\}$,
(ii) a one-dimensional subspace $\langle r\rangle$ with $r=(p,q)\ne 0$,
(iii) all of ${\mathbb R}^2$.
Ad (i): If $0$ is the only solution of $\Sigma$ then not all row vectors $c_i=(a_i,b_i)$ can be multiples of one and the same vector $c\ne0$. So there are two equations $a_1 x+b_1 y=0$, $a_2 x+ b_2 y=0$ in $\Sigma$ with linearly independent row vectors, and by means of row operations (Gaussian elimination on these two rows) one can transform them into $\Sigma_0: \ x=0, \ y=0$. Further row operations, subtracting suitable multiples of $x=0$ and $y=0$, turn every remaining equation of $\Sigma$ into $0=0$. We conclude that in this case all systems $\Sigma$ are equivalent to $\Sigma_0$.
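For instance, here is a minimal SymPy sketch of this case (the particular system is my own illustrative choice):

```python
# Three equations in x, y whose row vectors are not all multiples of a single
# vector, so the only solution is (0, 0).  Row reduction yields x = 0, y = 0,
# with the redundant third equation reduced to the zero row.
from sympy import Matrix

A = Matrix([[1, 2],    # x + 2y = 0
            [3, 1],    # 3x +  y = 0  (independent of the first row)
            [4, 3]])   # 4x + 3y = 0  (redundant)

rref, pivots = A.rref()
print(rref)    # Matrix([[1, 0], [0, 1], [0, 0]])  ->  x = 0, y = 0
```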
Ad (ii): The system $\Sigma$ has to contain at least one equation with $c_i=(a_i,b_i)\ne 0$. We claim that every equation with $c_i\ne 0$ is individually equivalent to $\Sigma_1: \ q x -p y=0$; so in this case any given $\Sigma$ is equivalent to $\Sigma_1$. To prove the claim we may assume $a_i\ne 0$ (the case $a_i=0$, $b_i\ne0$ is symmetric). Now $r\in L$ implies $a_i p+ b_i q=0$, and we must have $q\ne 0$: otherwise $a_ip=0$ would force $p=0$ and hence $r=0$. This gives $b_i=-a_i p/q$, so multiplying the equation $a_i x+ b_i y=0$ by $q/a_i$ yields $\Sigma_1$.
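For a concrete instance: take $r=(p,q)=(2,3)$, so $\Sigma_1:\ 3x-2y=0$. The equation $6x-4y=0$ satisfies $a_ip+b_iq=6\cdot 2-4\cdot 3=0$, and multiplying it by $q/a_i=3/6$ indeed recovers $3x-2y=0$.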
Ad (iii): This case is trivial. All rows of $\Sigma$ are $0$.
There is one fly in the ointment: inconsistent systems. Two inconsistent systems have the same set of solutions (namely, none), but they need not be equivalent in the sense you give. They may not even be systems in the same number of variables! But even if you require that they be systems in the same number of variables, you run into trouble. Here are two systems that have the exact same solutions (to wit, none):
$$\begin{array}{rcccl}
x & + & y & = & 0;\\
x & + & y & = & 1;
\end{array}\qquad\text{and}\qquad
\begin{array}{rcccl}
x & + & 2y & = & 0;\\
x & + & 2y & = & 1.
\end{array}$$
But $x+y=0$ cannot be obtained from the second system, since any combination of the equations in the second system will give you an equation in which the coefficient of $y$ is twice the coefficient of $x$.
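One can verify this with a quick rank computation (a SymPy sketch; encoding each equation as an augmented row is the only assumption here):

```python
# The augmented row (1, 1, 0) encodes x + y = 0.  It lies outside the row
# space spanned by the augmented rows of the second system, so no combination
# of those equations can produce it.
from sympy import Matrix

second = Matrix([[1, 2, 0],   # x + 2y = 0
                 [1, 2, 1]])  # x + 2y = 1
target = Matrix([[1, 1, 0]])  # x + y = 0

print(second.rank())                    # 2
print(second.col_join(target).rank())   # 3 -> not a combination of the rows
```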
But if you remove this bad case, then the result is true: two consistent systems that have the same (nonempty) set of solutions are equivalent in the sense you give.
Consider first the case of homogeneous systems (which are always consistent). We can write the system as $A\mathbf{x}=\mathbf{0}$, where $A$ is the $n\times m$ coefficient matrix of your system, with $n$ equations and $m$ unknowns.
A vector $\mathbf{x}_0$ is a solution if and only if it lies in the orthogonal complement of the subspace of $\mathbb{R}^m$ spanned by the rows of $A$ (which correspond to the equations). If $A\mathbf{x}=\mathbf{0}$ and $B\mathbf{x}=\mathbf{b}$ have the same solution set, then $\mathbf{b}=\mathbf{0}$ (since the solution that assigns $0$ to every variable is a solution to the first system, hence to the second).
But that means that the row space of $A$ has the same orthogonal complement as the row space of $B$. In a finite-dimensional inner product space, $\mathbf{W}^{\perp\perp}=\mathbf{W}$ for every subspace $\mathbf{W}$. Thus, the row space of $A$ and the row space of $B$ have to be equal.
That means that every row of $B$ (every equation in the second system) is a linear combination of the rows of $A$ (the equations of the first system), and every row of $A$ (equations in the first system) is a linear combination of the rows of $B$ (equations of the second system). Thus, the two systems are equivalent.
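To see this argument in action, here is a small SymPy sketch (the matrices are made-up examples): the rank checks confirm that each row of one matrix lies in the row space of the other.

```python
from sympy import Matrix

A = Matrix([[1, 0, -1],
            [0, 1,  2]])
B = Matrix([[1,  1,  1],     # row 1 of A + row 2 of A
            [2, -1, -4]])    # 2 * row 1 of A - row 2 of A

print(A.nullspace() == B.nullspace())    # True: same homogeneous solutions
print(A.rank() == A.col_join(B).rank())  # True: rows of B lie in rowspace(A)
print(B.rank() == B.col_join(A).rank())  # True: rows of A lie in rowspace(B)
```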
Now, to consider the more general case of systems of the form $A\mathbf{x}=\mathbf{a}$, note that the solutions to $A\mathbf{x}=\mathbf{a}$ are of the form
$$\mathbf{s}_0 + \mathbf{n}$$
where $\mathbf{n}$ is a solution to $A\mathbf{x}=\mathbf{0}$ and $\mathbf{s}_0$ is a specific solution to $A\mathbf{x}=\mathbf{a}$. This follows from the fact that all such vectors are solutions, since
$$A(\mathbf{s}_0+\mathbf{n}) = A\mathbf{s}_0 + A\mathbf{n} = \mathbf{a} + \mathbf{0} = \mathbf{a}.$$
And, if $\mathbf{s}_1$ is any solution, then $\mathbf{s}_1 = \mathbf{s}_0 + (\mathbf{s}_1-\mathbf{s}_0)$, and $\mathbf{n}=\mathbf{s}_1 -\mathbf{s}_0$ is a solution to $A\mathbf{x}=\mathbf{0}$:
$$A(\mathbf{s}_1-\mathbf{s}_0) = A\mathbf{s}_1 - A\mathbf{s}_0 = \mathbf{a}-\mathbf{a}=\mathbf{0}.$$
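For instance, the single equation $x+y=2$ has particular solution $\mathbf{s}_0=(2,0)$, the associated homogeneous equation $x+y=0$ has solutions $\mathbf{n}=t(-1,1)$, and every solution of $x+y=2$ is of the form $(2,0)+t(-1,1)$.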
Suppose that $A\mathbf{x}=\mathbf{a}$ has the same solution set as $B\mathbf{x}=\mathbf{b}$, and that both have at least one solution. Let $S_A$ be the solutions to $A\mathbf{x}=\mathbf{0}$ and let $S_B$ be the solutions to $B\mathbf{x}=\mathbf{0}$. I claim that $S_A = S_B$.
Indeed, let $\mathbf{s}_0$ be a particular solution to $A\mathbf{x}=\mathbf{a}$; then it is also a solution to $B\mathbf{x}=\mathbf{b}$ by assumption; so for every $\mathbf{n}\in S_B$, $\mathbf{s}_0+\mathbf{n}$ is a solution to $B\mathbf{x}=\mathbf{b}$, hence to $A\mathbf{x}=\mathbf{a}$, hence $\mathbf{n}\in S_A$. Thus, $S_B\subseteq S_A$, and a symmetric argument shows that $S_A\subseteq S_B$, so the two are equal.
But we know that if the solution set to $A\mathbf{x}=\mathbf{0}$ is the same as the solution set to $B\mathbf{x}=\mathbf{0}$, then the two systems are equivalent. If we take a row of $B$ and ignore the right hand side, then we can express it as a linear combination of the rows of $A$. If, taking into account the right hand side, we were to get an equation different from the equation we have in $B$, then this would tell us that the solutions to $B\mathbf{x}=\mathbf{b}$ satisfy two equations with identical left hand sides but different right hand sides; this is impossible, since we are assuming the system is consistent. Thus, the linear combination of equations of $A$ that yields the equation of $B$ will also give the same right hand side; so every equation in the second system is a linear combination of the equations in the first system; the converse argument also holds, so the two systems are equivalent.
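As a sanity check, the same rank test works on the augmented matrices; here is a SymPy sketch with two illustrative systems of my own that share the unique solution $(1,1)$:

```python
from sympy import Matrix

Aaug = Matrix([[1,  1, 2],   # x + y = 2
               [1, -1, 0]])  # x - y = 0   -> unique solution (1, 1)
Baug = Matrix([[2, 0, 2],    # 2x = 2  (= sum of the two equations above)
               [0, 2, 2]])   # 2y = 2  (= their difference)

print(Aaug.rank() == Aaug.col_join(Baug).rank())  # True
print(Baug.rank() == Baug.col_join(Aaug).rank())  # True
```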
I’ll first offer the most direct answer I can think of, given the material they cover up to this stage, and then I’ll provide what I think is the more conceptual, and honestly clearer, explanation.
Suppose first for simplicity that $A\to B$ by a single row operation (and say they are $m\times n$ matrices). Let’s say for concreteness the row operation is type (1), i.e.\ multiplication of a fixed row $i$ of $A$ by a non-zero scalar $c$. Then for any vector $X$, we have \begin{align} AX= \begin{pmatrix} \sum_{j=1}^na_{1j}x_j\\ \vdots\\ \sum_{j=1}^na_{ij}x_j\\ \vdots\\ \sum_{j=1}^na_{mj}x_j \end{pmatrix}, \quad\text{and}\quad BX= \begin{pmatrix} \sum_{j=1}^na_{1j}x_j\\ \vdots\\ c\cdot\sum_{j=1}^na_{ij}x_j\\ \vdots\\ \sum_{j=1}^na_{mj}x_j \end{pmatrix}. \end{align} So, $AX=0$ obviously implies $BX=0$. Conversely, $BX=0$ and $c\neq 0$ (so that we can divide the $i$-th equation in $BX=0$ by $c$) imply $AX=0$. Similarly, for a type (3) operation of swapping rows, obviously $AX=0$ if and only if $BX=0$. Likewise you can check for a type (2) operation of adding a non-zero multiple of one row to another. Therefore, if $A\to B$ by a single row operation, then $AX=0$ if and only if $BX=0$, meaning they have the same set of solutions. Finally, for general row-equivalent matrices $A=A_0\to\dots\to A_f=B$, the solution set is unchanged at each stage, so $A$ and $B$ have the same set of solutions (use induction to be precise).
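Here is the type (1) case numerically (a SymPy sketch; the matrix and the scalar $c=5$ are arbitrary choices of mine):

```python
from sympy import Matrix, eye

A = Matrix([[1, 2, 3],
            [0, 1, 4]])
E = eye(2)
E[0, 0] = 5      # elementary matrix: multiply row 1 by c = 5
B = E * A        # the row operation, realized as left multiplication

print(A.nullspace() == B.nullspace())  # True: AX = 0 iff BX = 0
```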
So, yes, row operations are done on the coefficients of the matrix, but once you write out the equation $AX=0$ and compare with the equation $BX=0$, you see that a vector $X$ satisfies one if and only if it satisfies the other. The $0$ on the RHS is very important: as a very trivial example, consider $A,B$ to be $1\times 1$ matrices, i.e.\ just numbers. Suppose $A=1$, and perform the row operation of multiplication by a non-zero scalar $c$, so $B=c$. The solution of $AX=1$ is simply $X=1$; on the other hand, the solution to $BX=1$ is $X=\frac{1}{c}$. So you see that if you mess with the RHS of the equations, you no longer get the same solutions.
I haven’t read Hoffman and Kunze, so I’m not sure if they explain it in the following way later on. But it is a theorem that $A,B$ are row-equivalent if and only if there is an invertible matrix $R$ such that $A=RB$; this is the underlying reason why $A,B$ have the same null space.
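A sketch of that theorem in the same vein (the two row operations below are arbitrary choices): composing the elementary matrices gives the invertible $R$.

```python
from sympy import Matrix, eye

A = Matrix([[1, 2, 3],
            [0, 1, 4]])
E1 = eye(2)
E1[1, :] = E1[1, :] + 3 * E1[0, :]   # add 3 * row 1 to row 2
E2 = Matrix([[0, 1],
             [1, 0]])                # swap the two rows
B = E2 * E1 * A                      # B is row-equivalent to A

R = (E2 * E1).inv()                  # invertible, with A = R * B
print(A == R * B)                      # True
print(A.nullspace() == B.nullspace())  # True: same null space
```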
Let’s speak more generally in terms of linear maps between vector spaces, because I find that matrices are just unnecessarily specific (and this is especially true for the forward implication). Consider two linear maps $\alpha,\beta:V\to W$, and suppose there is an isomorphism $\rho:W\to W$ such that $\alpha=\rho\circ\beta$. Then, by definition, for any $x\in V$, we have $\alpha(x)=0$ if and only if $\rho(\beta(x))=0$. Since $\rho$ is linear, $\rho(0)=0$, and since $\rho$ is an isomorphism, we have $\rho(\beta(x))=0$ if and only if $\beta(x)=0$. Therefore, $\alpha(x)=0$ if and only if $\beta(x)=0$. In other words, $\alpha,\beta$ have the same kernel/null space.
Conversely, suppose $\alpha,\beta:V\to W$ have the same null space $N\subset V$. The first isomorphism theorem for linear maps (which is one of the most fundamental results, leading to things like rank nullity and so on) tells us that $\alpha$ and $\beta$ descend to canonical isomorphisms $\tilde{\alpha}:V/N\to \text{image}(\alpha)$ and $\tilde{\beta}:V/N\to \text{image}(\beta)$ such that $\alpha=\tilde{\alpha}\circ\pi$ and $\beta=\tilde{\beta}\circ\pi$, where $\pi:V\to V/N$ is the canonical projection to the quotient. These equations can be rearranged into \begin{align} \alpha&=\tilde{\alpha}\circ\pi=\tilde{\alpha}\circ\left[(\tilde{\beta})^{-1}\circ\beta\right]=\rho_0\circ\beta, \end{align} where $\rho_0=\tilde{\alpha}\circ\tilde{\beta}^{-1}:\text{image}(\beta)\to\text{image}(\alpha)$ is an isomorphism. This is already a good partial converse.
If we wish to go further and extend $\rho_0$ to an isomorphism $\rho:W\to W$, then let us make the further assumption that $W$ is finite-dimensional. Then, we can fix complementary subspaces $K_{\alpha},K_{\beta}$ for $\text{image}(\alpha),\text{image}(\beta)$ in $W$ respectively, i.e.\ \begin{align} W=\text{image}(\alpha)\oplus K_{\alpha},\quad\text{and}\quad W=\text{image}(\beta)\oplus K_{\beta}. \end{align} Then, by finite-dimensionality of $W$, and the fact that the images of $\alpha,\beta$ have the same dimension (since they are isomorphic), it follows by subtracting dimensions (which we can do since they are finite) that \begin{align} \dim K_{\alpha}=\dim W-\dim\text{image}(\alpha)=\dim W-\dim\text{image}(\beta)=\dim K_{\beta}. \end{align} Thus, there is an isomorphism $\rho_1:K_{\beta}\to K_{\alpha}$. Putting $\rho_0$ and $\rho_1$ together, we get an isomorphism $\rho:W\to W$ which extends $\rho_0$. Therefore, $\alpha=\rho\circ\beta$, which proves our claim.
Maybe the first isomorphism theorem and complementary subspaces are too abstract, so here’s a more down-to-earth explanation of this converse using bases. Assume $W$ is finite-dimensional as above, and for further simplicity assume $V$ is also finite-dimensional (we don’t really need this, though, at least if you’re like me and happy to assume every vector space has a basis). Suppose that $\alpha,\beta$ both have the same null space $N\subset V$; we wish to show that there is an isomorphism $\rho:W\to W$ such that $\alpha=\rho\circ\beta$.
Fix a basis $\{v_1,\dots, v_n\}$ for $N$, and extend it to a basis $\{v_1,\dots, v_n; u_1,\dots, u_r\}$ for $V$ ($n$ for nullity, $r$ for rank :). Then, $\alpha,\beta$ send the first $n$ vectors to $0$, while the remaining $r$ vectors are mapped to bases of their images, i.e.\ $\{\alpha(u_1),\dots, \alpha(u_r)\}$ and $\{\beta(u_{1}),\dots,\beta(u_r)\}$ form bases for $\text{image}(\alpha)$ and $\text{image}(\beta)$ respectively. Extend these to bases for $W$ as $\{\alpha(u_1),\dots, \alpha(u_r),a_1,\dots, a_m\}$ and $\{\beta(u_1),\dots,\beta(u_r),b_1,\dots, b_m\}$. Finally, let $\rho:W\to W$ be the unique linear map which sends, for each $1\leq i\leq r$ and $1\leq j\leq m$, the vectors $\beta(u_i)$ to $\alpha(u_i)$ and $b_j$ to $a_j$. Then, since $\rho$ sends a basis to a basis, it is an isomorphism, and from the definitions we have $\alpha=\rho\circ\beta$ (these two linear maps send the vectors $v_1,\dots, v_n$ to $0$, and they are by definition equal on the vectors $u_1,\dots, u_r$, so $\alpha$ and $\rho\circ\beta$ agree on the basis $\{v_1,\dots, v_n;u_1,\dots, u_r\}$ of $V$ and thus are equal on all of $V$).
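Here is this construction carried out numerically (a SymPy sketch; the maps $\alpha,\beta$ and the basis extensions are my own illustrative choices):

```python
from sympy import Matrix

# alpha and beta share the null space N = span{(1, 1, 1)}.
alpha = Matrix([[1, -1,  0],
                [0,  1, -1],
                [0,  0,  0]])
beta  = Matrix([[2, -2,  0],
                [0,  0,  0],
                [0,  1, -1]])
assert alpha.nullspace() == beta.nullspace()

# Extend a basis of N to a basis of V = R^3 with u1, u2.
u1, u2 = Matrix([1, 0, 0]), Matrix([0, 1, 0])

# Bases of W = R^3: the images of u1, u2, extended by one more vector each.
P_alpha = Matrix.hstack(alpha * u1, alpha * u2, Matrix([0, 0, 1]))
P_beta  = Matrix.hstack(beta * u1,  beta * u2,  Matrix([0, 1, 0]))

# rho sends beta(u_i) -> alpha(u_i) and the extension vector b_1 -> a_1.
rho = P_alpha * P_beta.inv()

print(alpha == rho * beta)   # True
print(rho.det() != 0)        # True: rho is an isomorphism
```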
So, to summarize, yes, we do have an if and only if condition, but for the ‘if’ part I didn’t use matrix language, because I find it quite annoying; working more generally with linear transformations clarifies the setting much more. And regarding your final follow-up, I would say the ‘best’ (and also clearest) thing one can say is that the images of $\alpha,\beta$ are isomorphic (and in fact we have a canonical isomorphism $\rho_0$ as described above). Also, you may wish to think about the following set-theoretic property:
> If $f,g:X\to Y$ are functions such that $f=\rho\circ g$ for some bijection $\rho:Y\to Y$, and $S\subseteq Y$ satisfies $\rho^{-1}(S)=S$, then $f^{-1}(S)=g^{-1}(S)$.

In this statement, we don’t need any linearity assumptions. Anyway, in linear algebra, one place this might come up is if $\rho$ is an isomorphism which maps a certain subspace isomorphically onto itself (for example, a rotation about the $z$-axis maps the $x$-$y$ plane bijectively onto itself).