Linear Algebra – Converse for Theorem on Equivalent Systems of Linear Equations


On page 5 of Hoffman and Kunze's *Linear Algebra*, they prove the following theorem:

Theorem 1. Equivalent systems of linear equations have exactly the same solutions.

Here is the definition of equivalent systems of linear equations:

Two systems of linear equations are equivalent if each equation in
each system is a linear combination of the equations in the other
system.

I was wondering whether the converse is true. It appears to be, but maybe I am missing something: in every example I found of systems having exactly the same solutions, the systems were equivalent.

I think I am missing something trivial!

Best Answer

There is one fly in the ointment, which is inconsistent systems. Two inconsistent systems have the same set of solutions, but they need not be equivalent in the sense you give. They may not even be systems in the same number of variables! But even if you require that they be systems in the same number of variables, you run into trouble. Here are two systems that have the exact same solutions (to wit, none): $$\begin{array}{rcccl} x & + & y & = & 0;\\ x & + & y & = & 1; \end{array}\qquad\text{and}\qquad \begin{array}{rcccl} x & + & 2y & = & 0;\\ x & + & 2y & = & 1. \end{array}$$ But $x+y=0$ cannot be obtained from the second system, since any combination of the equations in the second system will give you an equation in which the coefficient of $y$ is twice the coefficient of $x$.
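
As a quick sanity check (just a sketch in Python with sympy, not part of the argument above), one can confirm that the augmented row $(1,1,0)$ corresponding to $x+y=0$ does not lie in the row space of the augmented matrix of the second system: appending it increases the rank.

```python
# Sketch: verify that x + y = 0 is not a linear combination of the
# equations of the second system (rank goes up when we append its row).
from sympy import Matrix

second_system = Matrix([[1, 2, 0],   # x + 2y = 0
                        [1, 2, 1]])  # x + 2y = 1
target_row = Matrix([[1, 1, 0]])     # x + y = 0 from the first system

# The row lies in the row space exactly when appending it keeps the rank.
in_row_space = second_system.col_join(target_row).rank() == second_system.rank()
print(in_row_space)  # False
```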

But if you remove this bad case, then the result is true: two consistent systems that have the same (nonempty) set of solutions are equivalent in the sense you give.

Consider first the case of homogeneous systems (which are always consistent). We can write the system as $A\mathbf{x}=\mathbf{0}$, where $A$ is the $n\times m$ coefficient matrix of your system, with $n$ equations and $m$ unknowns.

A vector $\mathbf{x}_0$ is a solution if and only if it lies in the orthogonal complement of the subspace of $\mathbb{R}^m$ spanned by the rows of $A$ (which correspond to the equations). If $A\mathbf{x}=\mathbf{0}$ and $B\mathbf{x}=\mathbf{b}$ have the same solution set, then $\mathbf{b}=\mathbf{0}$ (since the solution that assigns every variable the value $0$ is a solution to the first system, hence to the second).

But that means that the row space of $A$ has the same orthogonal complement as the row space of $B$. In a finite-dimensional vector space, $W^{\perp\perp}=W$ for every subspace $W$, so taking orthogonal complements once more shows that the row space of $A$ and the row space of $B$ must be equal.

That means that every row of $B$ (every equation in the second system) is a linear combination of the rows of $A$ (the equations of the first system), and every row of $A$ (equations in the first system) is a linear combination of the rows of $B$ (equations of the second system). Thus, the two systems are equivalent.
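
If you want to see the row-space criterion in action on a concrete pair of homogeneous systems, here is a minimal sketch using Python/sympy; the matrices $A$ and $B$ below are my own toy example, with the rows of $B$ chosen as combinations of the rows of $A$. Equal row spaces show up as identical reduced row echelon forms.

```python
# Sketch: two homogeneous systems with the same solution set have
# equal row spaces, which we can detect by comparing RREFs.
from sympy import Matrix

A = Matrix([[1, 1, 0],
            [0, 1, 1]])
B = Matrix([[1, 2, 1],    # = row1 + row2 of A
            [1, 0, -1]])  # = row1 - row2 of A

same_row_space = A.rref()[0] == B.rref()[0]
print(same_row_space)  # True: A x = 0 and B x = 0 are equivalent systems
```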

Now, to consider the more general case of systems of the form $A\mathbf{x}=\mathbf{a}$, note that the solutions to $A\mathbf{x}=\mathbf{a}$ are exactly the vectors of the form $$\mathbf{s}_0 + \mathbf{n}$$ where $\mathbf{s}_0$ is one specific solution to $A\mathbf{x}=\mathbf{a}$ and $\mathbf{n}$ is a solution to $A\mathbf{x}=\mathbf{0}$. Indeed, every such vector is a solution, since $$A(\mathbf{s}_0+\mathbf{n}) = A\mathbf{s}_0 + A\mathbf{n} = \mathbf{a} + \mathbf{0} = \mathbf{a}.$$ And conversely, if $\mathbf{s}_1$ is any solution, then $\mathbf{s}_1 = \mathbf{s}_0 + (\mathbf{s}_1-\mathbf{s}_0)$, where $\mathbf{n}=\mathbf{s}_1 -\mathbf{s}_0$ is a solution to $A\mathbf{x}=\mathbf{0}$: $$A(\mathbf{s}_1-\mathbf{s}_0) = A\mathbf{s}_1 - A\mathbf{s}_0 = \mathbf{a}-\mathbf{a}=\mathbf{0}.$$
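
Here is a small numerical illustration of that decomposition (again just a sketch of my own in sympy; the matrix and vectors are an arbitrary example): a particular solution plus any homogeneous solution is again a solution.

```python
# Sketch: check A*s0 = a, A*n = 0, and A*(s0 + n) = a on a small example.
from sympy import Matrix

A  = Matrix([[1, 1, 0],
             [0, 1, 1]])
a  = Matrix([2, 3])
s0 = Matrix([-1, 3, 0])   # a particular solution of A x = a
n  = Matrix([1, -1, 1])   # a solution of A x = 0

print(A * s0 == a)               # True
print(A * n == Matrix([0, 0]))   # True
print(A * (s0 + n) == a)         # True
```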

Suppose that $A\mathbf{x}=\mathbf{a}$ has the same solution set as $B\mathbf{x}=\mathbf{b}$, and that both have at least one solution. Let $S_A$ be the solutions to $A\mathbf{x}=\mathbf{0}$ and let $S_B$ be the solutions to $B\mathbf{x}=\mathbf{0}$. I claim that $S_A = S_B$.

Indeed, let $\mathbf{s}_0$ be a particular solution to $A\mathbf{x}=\mathbf{a}$; by assumption it is also a solution to $B\mathbf{x}=\mathbf{b}$. So for every $\mathbf{n}\in S_B$, $\mathbf{s}_0+\mathbf{n}$ is a solution to $B\mathbf{x}=\mathbf{b}$, hence to $A\mathbf{x}=\mathbf{a}$; therefore $A\mathbf{n} = A(\mathbf{s}_0+\mathbf{n}) - A\mathbf{s}_0 = \mathbf{a}-\mathbf{a} = \mathbf{0}$, that is, $\mathbf{n}\in S_A$. Thus $S_B\subseteq S_A$, and a symmetric argument shows that $S_A\subseteq S_B$, so the two sets are equal.
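
As a sanity check of this claim on an example (a sketch of my own, with $B$ built from combinations of the rows of $A$ so that the two systems really do have the same solutions), the computed null spaces of $A$ and $B$ span the same subspace:

```python
# Sketch: S_A and S_B, computed as null spaces, span the same subspace.
from sympy import Matrix

A = Matrix([[1, 1, 0],
            [0, 1, 1]])
B = Matrix([[1, 2, 1],    # = row1 + row2 of A
            [1, 0, -1]])  # = row1 - row2 of A

NA = Matrix.hstack(*A.nullspace())  # basis of S_A, as columns
NB = Matrix.hstack(*B.nullspace())  # basis of S_B, as columns

# Same span <=> the transposed bases have the same RREF.
print(NA.T.rref()[0] == NB.T.rref()[0])  # True: S_A = S_B
```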

But we know that if the solution set of $A\mathbf{x}=\mathbf{0}$ is the same as the solution set of $B\mathbf{x}=\mathbf{0}$, then the two homogeneous systems are equivalent. So if we take a row of $B$ and ignore the right-hand side, we can express it as a linear combination of the rows of $A$. If, taking the right-hand sides into account, that combination gave an equation different from the one we have in $B$, then the solutions of $B\mathbf{x}=\mathbf{b}$ would satisfy two equations with identical left-hand sides but different right-hand sides; this is impossible, since we are assuming the system is consistent. Thus, the linear combination of equations of $A$ that yields the left-hand side of an equation of $B$ also yields the same right-hand side, so every equation in the second system is a linear combination of the equations in the first system. The symmetric argument also holds, so the two systems are equivalent.
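
Finally, the whole criterion for consistent inhomogeneous systems can be checked mechanically: by the definition of equivalence, the two systems are equivalent exactly when the augmented matrices $[A\mid\mathbf{a}]$ and $[B\mid\mathbf{b}]$ have the same row space. A short sketch (my own example, again using sympy, with the equations of the second system built as combinations of those of the first):

```python
# Sketch: equivalence of two consistent systems = equal row spaces of
# their augmented matrices [A | a] and [B | b].
from sympy import Matrix

Aa = Matrix([[1, 1, 0, 2],     # x + y     = 2
             [0, 1, 1, 3]])    #     y + z = 3
Bb = Matrix([[1, 2, 1, 5],     # sum of the two equations above
             [1, 0, -1, -1]])  # their difference

equivalent = Aa.rref()[0] == Bb.rref()[0]
print(equivalent)  # True: each equation of one system is a combination of the other's
```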