Mathematics is really about relations between things. Therefore, while constructions are good and useful, you should never take them very seriously.
Construction agnosticism
Consider the real and complex numbers. Given $\mathbb R$, you can construct a ring which is isomorphic to $\mathbb C$ by taking pairs of real numbers and defining addition and multiplication in the usual way. Note here that I say that you can define a ring isomorphic to $\mathbb C$. We can ask the following question:
- Is the ring we have defined actually the ring of complex numbers $\mathbb C$ itself?
But you shouldn't ask yourself that question. (Just because you can ask a question doesn't mean you should.) It won't do you any harm; it's just not useful.
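As an aside, the pair construction is easy to make concrete. Here is a minimal sketch (my own illustration, not part of the original argument): ordered pairs of reals with the usual addition and multiplication behave exactly as the complex numbers should.

```python
# Construction of "a" complex-number ring: ordered pairs of reals.
def add(z, w):
    return (z[0] + w[0], z[1] + w[1])

def mul(z, w):
    # (a, b)(c, d) = (ac - bd, ad + bc), the usual rule
    return (z[0] * w[0] - z[1] * w[1], z[0] * w[1] + z[1] * w[0])

i = (0.0, 1.0)       # the pair playing the role of i
print(mul(i, i))     # (-1.0, 0.0), i.e. i^2 = -1
```

Whether these pairs "are" the complex numbers is exactly the question the text suggests not asking: the ring is isomorphic to $\mathbb C$, and that is all we use.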
What really matters is that there is a ring which has all the properties that we want from the complex numbers — that such a ring exists. Then we can say: let $\mathbb C$ be such a ring, and then study the maps $f: \mathbb C \to \mathbb C$.
When we say "the" complex number field, the word "the" is a red herring; we just want to talk about some ring which works that way. Similarly, we don't really care about "the" real numbers. If $\mathbb C$ is "the" complex number field, then it contains a principal ideal domain $Z$, coinciding with the abelian group generated by the multiplicative identity; the quotient field $Q$ of $Z$; and a subfield $F$ which is the analytic completion of $Q$. We can then call $F$ "the" real numbers. Just as we don't care whether "the" complex numbers consist of ordered pairs of objects from some ring $S$, we don't care whether "the" real numbers are the ring $S$ itself or some set of ordered pairs $(s,0)$, for $s \in S$ and $0$ the additive identity of $S$. It just doesn't matter — we care only that the proper subfield $F \subseteq \mathbb C$ has all of the properties we expect of the real numbers, so we may as well adopt the convention that this subfield is the field of real numbers.
This applies to the integers as well. The von Neumann construction of the natural numbers makes $3 = \{ \varnothing, \{\varnothing\}, \{\varnothing,\{\varnothing\}\}\} $. Does this mean that $3$ "really is" a set which e.g. has the empty set as a member? Not really, because these ideas are totally irrelevant to what we care about the number $3$. We could consider any other "construction" of the natural numbers, in which case $3$ might not be a set at all (for instance, if we consider a set theory in which the natural numbers are atoms), in which case it is not only irrelevant to consider the maps $f:3\to3$, but these would not even be defined. All we care about is that $3$ is part of a collection of objects $\mathbb N$ which forms a monoid with some specific properties. The "true identity" of $3$ is beside the point.
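To make the point vivid, here is a sketch (my own illustration) of the von Neumann construction using Python frozensets. It shows that the membership fact $\varnothing \in 3$ really does hold under this particular construction, which is precisely why it is an accident of the construction rather than a property of "the number $3$":

```python
# Von Neumann naturals: 0 = ∅ and n+1 = n ∪ {n}.
def von_neumann(n):
    s = frozenset()          # 0 is the empty set
    for _ in range(n):
        s = s | {s}          # successor: n ∪ {n}
    return s

three = von_neumann(3)
print(len(three))            # 3: the numeral n has exactly n elements
print(frozenset() in three)  # True: under THIS construction, ∅ ∈ 3
```

In a set theory with atoms, neither line of output would even make sense, which is the text's point.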
Mathematical "interfaces" (in place of "foundations")
If this ambiguity bothers you, you can think of it axiomatically as follows: treat each of the sets we care about — such as $\mathbb N$, $\mathbb Z$, $\mathbb Q$, $\mathbb R$, and $\mathbb C$ — as underspecified objects, for which we specify all of the properties that we care about, and only those properties.
von Neumann's construction of ordinals describes a certain countable well-ordered monoid: we don't define $\mathbb N$ to be that monoid, but merely say that it is isomorphic to it, leaving further details to be filled in later.
From equivalence classes of ordered pairs of elements of $\mathbb N$, you can define an ordered ring $Z$, which contains a monoid $M \cong \mathbb N$ and which is the closure of $M$ under differences. Now, $\mathbb N$ can never be a subset of this set of equivalence classes; but it can be a subset of some other set. We never pretended to characterize precisely what object $\mathbb N$ is, so who is to say that $\mathbb N$ is not itself contained in a ring which is isomorphic to $Z$? Nobody, that's who; without loss of generality we may define $\mathbb Z$ to be a ring isomorphic to $Z$, and declare, as a refinement of the earlier specification, that in fact $\mathbb N$ is contained in $\mathbb Z$.
- We may similarly declare that $\mathbb Z \subseteq \mathbb Q$, where $\mathbb Q$ is isomorphic to the ring of equivalence classes of ordered pairs over $\mathbb Z$ in the usual way. We also declare that $\mathbb Q \subseteq \mathbb R$, where $\mathbb R$ is isomorphic to the field of Dedekind cuts over $\mathbb Q$, or (equivalently) to the field of equivalence classes of Cauchy sequences, or any of the typical constructions of the real numbers. The set $\mathbb R$ isn't defined to be any particular one of these constructions, because (a) any of these constructions is as good as the others, and (b) we don't really care about any of the details lying underneath any of the constructions, so long as the properties we care about hold for each.
You should think of these refinements as axioms which we add during the process of doing mathematics.
A definition is in the first place only an axiom: one which defines a constant, such as defining $\varnothing$ by asserting $\forall x\colon \neg(x\in\varnothing)$. These mathematical underspecifications — mathematical interfaces — are also axioms: having proven that a certain sort of monoid satisfying the Peano axioms exists, we assert that $\mathbb N$ is such a monoid, saying nothing more until it suits us to; and similarly we declare that $\mathbb C$ is a number field of a kind which we have proven to exist, and which happens to contain the field $\mathbb R$ that we mentioned previously without quite defining it completely.
Fundamentally, this approach to mathematics is not really all that different from what we usually do: it merely substitutes partial descriptions for complete descriptions of objects (what Bertrand Russell would call simply "a description"). But pragmatically, in the real world as in mathematics, partial descriptions tend to be all that we care about (and in the real world, they are all that we ever have access to). Embracing this allows you to focus on what really matters.
If you are a mathematical "realist", for whom the real numbers have an identity separate from our descriptions of them and some fixed location in the mathematical firmament, this sort of wishy-washiness as to the "exact identity" of these objects may bother you. After all, if you imagine the possible identities of the objects $\mathbb N$, $\mathbb Z$, $\mathbb R$, etc. as you subsume them into more and more complicated objects, it would seem that the set of objects with which a set such as $\mathbb N$ could be identified recedes to infinity as our mathematical framework grows more elaborate. To this I can only say: so much the worse for realism. If you want the freedom to construct objects and concern yourself only with the relationships that matter, it is better in the end to abandon this preoccupation with the precise identity of a mathematical object, and engage in mathematics as the creative, descriptive, and above all incomplete and ongoing endeavor that it is.
We know that elementary row operations do not change the row space of a matrix. And if a matrix is in reduced row echelon form (rref), then it is relatively easy to check whether a vector belongs to its row space.
So suppose you have a matrix $A$ and a reduced row echelon matrix $B$. If $R_A$ and $R_B$ are their row spaces, you can easily check whether $R_A\subseteq R_B$. Of course, this is only "half"¹ of the verification that $R_A=R_B$, which is equivalent to $A\sim B$.
Example. Suppose that I have a matrix $$A=
\begin{pmatrix}
1 & 1 & 1 & 2 \\
1 & 1 & 0 & 0 \\
0 & 1 & 1 & 1 \\
1 & 2 & 1 & 1 \\
\end{pmatrix}.$$
And that after Gaussian elimination I get: $$B=
\begin{pmatrix}
1 & 0 & 0 & 1 \\
0 & 1 & 0 &-1 \\
0 & 0 & 1 & 2 \\
0 & 0 & 0 & 0 \\
\end{pmatrix}
$$
To check whether $R_A\subseteq R_B$ it suffices to check whether each row of $A$ is a linear combination of $(1,0,0,1)$, $(0,1,0,-1)$ and $(0,0,1,2)$, i.e., whether it is of the form $c_1(1,0,0,1)+c_2(0,1,0,-1)+c_3(0,0,1,2)$. But since these vectors are very simple, we can read the coefficients off directly: in the coordinates where there are pivots, the combination is just $c_1$, $c_2$ and $c_3$. So it is easy to find the coefficients.
Let us try with the fourth row: $(1,2,1,1)$.
We look at the first three coordinates. (Those are the coordinates with the pivots.) And we check whether
$$(\boxed{1},\boxed{2},\boxed{1},1)=
1\cdot(1,0,0,1)+2\cdot(0,1,0,-1)+1\cdot(0,0,1,2)
$$
We see that this is true. If the same thing works for each row of $A$, this shows that $R_A\subseteq R_B$.
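For larger examples this row-by-row check is mechanical, so it can be sketched in code. The following is my own illustration in Python/NumPy (the function name and implementation are not from the original answer): for each row of $A$, read the candidate coefficients off the pivot coordinates of $B$, then verify the linear combination.

```python
import numpy as np

# Check R_A ⊆ R_B when B is in rref: the coefficient of each nonzero row of B
# is simply the entry of the candidate row in that row's pivot column.
def rows_contained_in_rref(A, B):
    B = np.asarray(B, dtype=float)
    nonzero = [r for r in B if np.any(r != 0)]
    pivots = [int(np.argmax(r != 0)) for r in nonzero]   # column of leading 1
    for row in np.asarray(A, dtype=float):
        coeffs = [row[p] for p in pivots]                # c_i from pivot coords
        combo = sum(c * r for c, r in zip(coeffs, nonzero))
        if not np.allclose(combo, row):
            return False
    return True

A = [[1, 1, 1, 2], [1, 1, 0, 0], [0, 1, 1, 1], [1, 2, 1, 1]]
B = [[1, 0, 0, 1], [0, 1, 0, -1], [0, 0, 1, 2], [0, 0, 0, 0]]
print(rows_contained_in_rref(A, B))   # True
```

Running this on the worked example above confirms that every row of $A$ lies in $R_B$.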
Let me now try another example, where I make a mistake on purpose, to see how to find it.
$$\begin{pmatrix}
1 & 1 & 1 & 2 \\
1 & 1 & 0 & 0 \\
0 & 1 & 1 & 1 \\
1 & 2 & 1 & 1 \\
\end{pmatrix}\overset{(1)}\sim
\begin{pmatrix}
0 & 0 & 1 & 1 \\
1 & 1 & 0 & 0 \\
0 & 1 & 1 & 1 \\
1 & 2 & 1 & 1 \\
\end{pmatrix}\overset{(2)}\sim
\begin{pmatrix}
0 & 0 & 1 & 1 \\
1 & 1 & 0 & 0 \\
0 & 1 & 0 & 0 \\
1 & 2 & 0 & 0 \\
\end{pmatrix}\overset{(3)}\sim
\begin{pmatrix}
0 & 0 & 1 & 1 \\
1 & 1 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 1 & 0 & 0 \\
\end{pmatrix}\overset{(4)}\sim
\begin{pmatrix}
0 & 0 & 1 & 1 \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 \\
\end{pmatrix}\overset{(5)}\sim
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 \\
\end{pmatrix}
$$
We can check that
$$(1,1,1,2)\ne 1\cdot(1,0,0,0)+1\cdot(0,1,0,0)+1\cdot(0,0,1,1).$$
I can even make the same verification for the matrix after each step. For example, for the matrix after step $(2)$, i.e., $\begin{pmatrix}
0 & 0 & 1 & 1 \\
1 & 1 & 0 & 0 \\
0 & 1 & 0 & 0 \\
1 & 2 & 0 & 0 \\
\end{pmatrix}$, everything works. So the error must have happened in step $(2)$ or earlier.
I will stress once again that this is only half of the verification: I have checked $R_A\subseteq R_B$, but not $R_B\subseteq R_A$.
So it is possible to make a mistake which this check does not notice. Here is a (rather naive) example:
$$\begin{pmatrix}
1 & 1 & 1 & 2 \\
1 & 1 & 0 & 0 \\
0 & 1 & 1 & 1 \\
1 & 2 & 1 & 1 \\
\end{pmatrix}\sim
\begin{pmatrix}
1 & 1 & 1 & 2 \\
1 & 1 & 0 & 0 \\
0 & 1 & 1 & 1 \\
0 & 0 & 0 &-1 \\
\end{pmatrix}\sim
\begin{pmatrix}
1 & 1 & 1 & 2 \\
1 & 1 & 0 & 0 \\
0 & 1 & 1 & 1 \\
0 & 0 & 0 & 1 \\
\end{pmatrix}\sim
\begin{pmatrix}
1 & 1 & 1 & 0 \\
1 & 1 & 0 & 0 \\
0 & 1 & 1 & 0 \\
0 & 0 & 0 & 1 \\
\end{pmatrix}\sim
\begin{pmatrix}
1 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 \\
0 & 1 & 1 & 0 \\
0 & 0 & 0 & 1 \\
\end{pmatrix}\sim
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
\end{pmatrix}
$$
The sanity check described above passes: we check that $R_A\subseteq R_B$, which is true. But the result is incorrect.
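To see concretely why the one-sided check cannot catch this mistake: the wrong "result" is the identity matrix, whose row space is all of $\mathbb R^4$, so every row of $A$ lies in it trivially. A small NumPy illustration (my own; the rank comparison at the end is an extra observation, not part of the check described in the answer):

```python
import numpy as np

A = np.array([[1, 1, 1, 2], [1, 1, 0, 0], [0, 1, 1, 1], [1, 2, 1, 1]],
             dtype=float)
B_wrong = np.eye(4)   # the (incorrect) final matrix from the naive example

# Every row of A is a combination of the identity's rows (its own entries
# serve as the coefficients), so the R_A ⊆ R_B check passes.
print(all(np.allclose(row @ B_wrong, row) for row in A))          # True

# Yet the row spaces differ: A has rank 3, while the identity has rank 4.
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B_wrong))   # 3 4
```

The rank mismatch shows immediately that the reverse inclusion $R_B\subseteq R_A$ must fail.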
If I want to be able to check both inclusions, and additionally to make a check after each step, I can use an augmented matrix. (But this is much more work.)
In our example I would do the following
$$
\left(\begin{array}{cccc|cccc}
1 & 1 & 1 & 2 & 1 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\
1 & 2 & 1 & 1 & 0 & 0 & 0 & 1 \\
\end{array}\right)\sim
\left(\begin{array}{cccc|cccc}
0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\
1 & 1 & 0 & 0 & 0 & 0 &-1 & 1 \\
\end{array}\right)\sim
\left(\begin{array}{cccc|cccc}
0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 &-1 &-1 & 1 \\
\end{array}\right)\sim
\left(\begin{array}{cccc|cccc}
1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\
0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 &-1 &-1 & 1 \\
\end{array}\right)\sim
\left(\begin{array}{cccc|cccc}
1 & 0 &-1 &-1 & 0 & 1 &-1 & 0 \\
0 & 1 & 1 & 1 & 0 & 0 & 1 & 0 \\
0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 &-1 &-1 & 1 \\
\end{array}\right)\sim
\left(\begin{array}{cccc|cccc}
1 & 0 & 0 & 1 & 1 & 0 &-1 & 0 \\
0 & 1 & 0 &-1 &-1 & 1 & 1 & 0 \\
0 & 0 & 1 & 2 & 1 &-1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 &-1 &-1 & 1 \\
\end{array}\right)
$$
Now the four numbers on the right are coefficients which tell me how to obtain that row as a linear combination of the rows of the original matrix. For example, if I look at the first row, I can check that
$$1\cdot(1,1,1,2)-1\cdot(0,1,1,1)=(1,0,0,1).$$
By making a similar verification for each row, I can verify that $R_B\subseteq R_A$.
Notice that I can do this also halfway through the computation. For example, if I look at the last row of the third matrix, I have there
$$\left(\begin{array}{cccc|cccc}
0 & 0 & 0 & 0 & 0 &-1 &-1 & 1 \\
\end{array}\right)$$
And I can check that
$$-1\cdot(1,1,0,0)-1\cdot(0,1,1,1)+1\cdot(1,2,1,1)=(0,0,0,0).$$
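The bookkeeping behind the augmented matrix can be summarized as follows: if at any stage the right block is $C$ and the left block is $M$, then $CA = M$, so every current row is an explicit combination of the original rows. Here is a sketch in NumPy (my own illustration), reading $B$ and $C$ off the final augmented matrix above:

```python
import numpy as np

A = np.array([[1, 1, 1, 2], [1, 1, 0, 0], [0, 1, 1, 1], [1, 2, 1, 1]],
             dtype=float)

# Final augmented matrix [B | C] from the worked example.
B = np.array([[1, 0, 0, 1], [0, 1, 0, -1], [0, 0, 1, 2], [0, 0, 0, 0]],
             dtype=float)
C = np.array([[1, 0, -1, 0], [-1, 1, 1, 0], [1, -1, 0, 0], [0, -1, -1, 1]],
             dtype=float)

# C @ A reconstructs B row by row, so every row of B lies in R_A.
print(np.allclose(C @ A, B))   # True
```

The same identity $CA = M$ can be tested after every elimination step, which is exactly the mid-computation check described above.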
¹ This is similar to the advice given in the comments. If you are using Gaussian elimination to solve a linear system, you can check whether the solution you got is indeed a solution. But it is still possible that you do not have all solutions. So this is just a "half-check".
Best Answer
If you have time, you could always do the calculation twice, once with the top row as a starting point and once (say) with the bottom row. For example: $$\begin{vmatrix} a & b & c\\d & e & f\\g & h & i \end{vmatrix}=$$ $$a(ei - fh) -b(di -fg) + c(dh - eg)$$ Or: $$g(bf - ce) - h(af -cd) +i(ae -bd)$$
Of course these give the same result, just with a different order of calculations. If you calculate both by hand and get different results, you know you have an error.
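The two expansions can be compared mechanically. A quick sketch in Python (my own illustration; the matrix entries are arbitrary test values):

```python
# Cofactor expansion of a 3x3 determinant along the top row...
def det_top(a, b, c, d, e, f, g, h, i):
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# ...and along the bottom row.
def det_bottom(a, b, c, d, e, f, g, h, i):
    return g * (b * f - c * e) - h * (a * f - c * d) + i * (a * e - b * d)

m = (2, 1, 3, 0, 4, 1, 5, 2, 6)     # arbitrary test matrix, row by row
print(det_top(*m), det_bottom(*m))  # equal values confirm the arithmetic
```

When doing the check by hand, any disagreement between the two values signals an arithmetic error somewhere.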
The method suggested by Git Gud in the comments can also be used, i.e. add scalar multiples of the different rows to each other to get a triangular matrix. A worked example may be found here.