[Math] Why doesn’t Gaussian elimination change the solution set

gaussian elimination · linear algebra

Of course, Gaussian elimination is safe to use, as proven by the countless systems I've solved with it while practising my (I must add, very basic) linear algebra. But when Jim Hefferon's free textbook on linear algebra raised the question of why

  1. Scaling rows by a non-zero constant
  2. Adding rows to each other

do not change the solution set, I found myself unable to give a correct mathematical proof.

In his answer manual, Hefferon himself "proves" the safety of both operations by showing that each operation can be reversed without adding or losing solutions.

For example, if a row is scaled by a nonzero constant $C$, this operation can be reversed by dividing both sides of that row by $C$, without losing or creating solutions.
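As a quick numerical sketch of that reversal (the system and the constant here are my own hypothetical choices, not Hefferon's), we can scale a row of an augmented matrix, undo it, and check that a known solution survives both steps:

```python
from fractions import Fraction

# A hypothetical 2x2 system: x + y = 3, 2x - y = 0, with solution (1, 2).
# Each row of the augmented matrix is [coeff_x, coeff_y, rhs].
rows = [[Fraction(1), Fraction(1), Fraction(3)],
        [Fraction(2), Fraction(-1), Fraction(0)]]

def satisfies(rows, x, y):
    # Check whether (x, y) satisfies every equation in the augmented matrix.
    return all(r[0] * x + r[1] * y == r[2] for r in rows)

c = Fraction(5)
scaled = [rows[0], [c * v for v in rows[1]]]        # scale row 2 by c
restored = [scaled[0], [v / c for v in scaled[1]]]  # reverse: scale by 1/c

# The known solution satisfies the original and the scaled system,
# and reversing the operation recovers the original rows exactly.
print(satisfies(rows, 1, 2))       # True
print(satisfies(scaled, 1, 2))     # True
print(restored == rows)            # True
```

Of course, checking one solution on one system is exactly the kind of "proof by usage" the question is worried about; the answer below gives the actual argument.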

Does this qualify as a mathematical proof? To me it seems that proving an operation through its usage isn't exactly proving its correctness, because it assumes the solution set wasn't changed between performing and reversing the operation. If this indeed does not qualify as proof, then what would be a proof that no solutions are lost or gained when performing Gaussian elimination?

Best Answer

Almost. Maybe the best way to see this is as follows: if $C$ is an invertible matrix, then the set $S_1$ of vectors $x$ such that $Ax=b$ is the same as the set $S_2$ of vectors $x$ such that $CAx=Cb$. This follows because for each $x$ with $Ax=b$ we immediately find $CAx=Cb$, and vice versa: for each $x$ with $CAx=Cb$ we find, by multiplying with $C^{-1}$ (which exists by the assumption of invertibility), $Ax=C^{-1}CAx=C^{-1}Cb=b$. Therefore we have both $S_1\subseteq S_2$ and $S_2\subseteq S_1$.
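Both directions of that argument can be checked on a small example. The matrices $A$, $b$, and the invertible $C$ below are my own hypothetical choices ($C$ has determinant $1$, so its inverse is easy to write down):

```python
from fractions import Fraction

def matmul(M, N):
    # Multiply two matrices, each given as a list of rows.
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

# A hypothetical system Ax = b with solution x = (1, 2).
A = [[Fraction(1), Fraction(1)], [Fraction(2), Fraction(-1)]]
b = [[Fraction(3)], [Fraction(0)]]
x = [[Fraction(1)], [Fraction(2)]]

# An invertible C (det = 1) and its inverse.
C     = [[Fraction(1), Fraction(1)], [Fraction(1), Fraction(2)]]
C_inv = [[Fraction(2), Fraction(-1)], [Fraction(-1), Fraction(1)]]

# Forward direction: Ax = b implies (CA)x = Cb.
print(matmul(matmul(C, A), x) == matmul(C, b))   # True
# Backward direction: multiplying by C^{-1} recovers A, since C^{-1}C = I.
print(matmul(C_inv, matmul(C, A)) == A)          # True
```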

Now, to apply this to Gaussian elimination, observe that each single step (scaling a row, adding a row to another row, swapping rows) can be achieved by multiplying with a suitable simple matrix $C$ with an easily found inverse.
