Those two rows do provide a basis of the row space, yes; it's not the only one, but it answers the question, so it will do.
The point is that the rows have the same span after Gaussian elimination as they do before, but in reduced row echelon form the non-zero rows are linearly independent, so they form a basis rather than just a spanning set.
If you know the sifting algorithm, you could also apply it directly to the rows of the original matrix, to get a basis for the row space whose members are actually rows of $A$.
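For concreteness, here is a minimal sympy sketch of both routes, using a made-up $3\times 3$ matrix as a stand-in (the original $A$ isn't reproduced here):

```python
import sympy as sp

# Made-up stand-in; the original matrix A isn't reproduced here.
A = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [1, 0, 1]])

# Route 1: the non-zero rows of the reduced row echelon form
# form a basis of the row space.
R, _ = A.rref()
rref_basis = [R.row(i) for i in range(R.rows) if any(R.row(i))]

# Route 2 ("sifting"): keep a row only if it is not in the span of
# the rows kept so far, i.e. only if it increases the rank.
kept = []
for i in range(A.rows):
    candidate = sp.Matrix(kept + [list(A.row(i))])
    if candidate.rank() > len(kept):
        kept.append(list(A.row(i)))

print(rref_basis)  # basis taken from the rows of rref(A)
print(kept)        # basis whose members are actual rows of A
```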
I will explain this with a much simpler example, in the two-dimensional case. Say we have the following equations:
\begin{equation}
\begin{aligned}
2x + 3y &= 5 & \text{(1)} \\
x + 3y &= 4 & \text{(2)}
\end{aligned}
\end{equation}
This system can be represented as follows:
$$\begin{pmatrix} 2 & 3 \\ 1 & 3 \end{pmatrix} \begin{pmatrix}x \\ y \end{pmatrix} = \begin{pmatrix} 5 \\ 4 \end{pmatrix} $$
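Before reducing anything, it may help to know the answer we are aiming for. A minimal sympy sketch that solves this system exactly (the solution turns out to be $x = y = 1$):

```python
import sympy as sp

A = sp.Matrix([[2, 3],
               [1, 3]])
b = sp.Matrix([5, 4])

# Solve A * (x, y)^T = b exactly over the rationals.
print(A.solve(b))  # Matrix([[1], [1]]), i.e. x = 1, y = 1
```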
When doing row reduction, I am allowed to do the following operations:
(1) Interchanging two rows.
(2) Multiplying a row by a non-zero scalar.
(3) Adding a multiple of one row to another row.
All these operations on the matrix translate to the operations we are familiar with when solving a system of linear equations. For example, subtracting equation $(2)$ from equation $(1)$ results in the equation $x = 1$. On the matrix this means subtracting row $2$ from row $1$ on both sides (that is, on the augmented matrix), which gives
$$\begin{pmatrix} 1 & 0 \\ 1 & 3 \end{pmatrix}\begin{pmatrix}x \\ y \end{pmatrix} = \begin{pmatrix} 1 \\ 4 \end{pmatrix}$$ To simplify further, we can subtract row $1$ from row $2$, which gives
$$\begin{pmatrix} 1 & 0 \\ 0 & 3 \end{pmatrix}\begin{pmatrix}x \\ y \end{pmatrix} = \begin{pmatrix} 1 \\ 3 \end{pmatrix}$$
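From the last display we can read off $x = 1$ and $3y = 3$, so $y = 1$. If you want to replay these steps programmatically, here is a small sympy sketch performing exactly the two subtractions above on the augmented matrix, followed by a scaling (operation (2)) to read off $y$:

```python
import sympy as sp

# Augmented matrix [A | b] for the system above.
M = sp.Matrix([[2, 3, 5],
               [1, 3, 4]])

M[0, :] = M[0, :] - M[1, :]   # row 1 <- row 1 - row 2  =>  x = 1
M[1, :] = M[1, :] - M[0, :]   # row 2 <- row 2 - row 1
print(M)  # Matrix([[1, 0, 1], [0, 3, 3]])

M[1, :] = M[1, :] / 3         # scale row 2 to read off y = 1
print(M)  # Matrix([[1, 0, 1], [0, 1, 1]])
```

Each assignment is one elementary row operation, so the printed matrices match the displays above.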
Why are we doing this? Matrices became more than just a tool for solving linear equations: they became algebraic objects in their own right, with many properties of their own. Read A. Cayley, "A Memoir on the Theory of Matrices". Sorry, I digress.
You can also do column operations, but then the matrices in the representation have to be different:
$$\begin{pmatrix} x & y \end{pmatrix}\begin{pmatrix} 2 &1 \\ 3 & 3 \end{pmatrix} = \begin{pmatrix} 5 & 4 \end{pmatrix}$$ Why don't we represent it this way? You tell me. I didn't answer your question directly, but I think with the right motivation you will find your way.
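One small observation that may help: transposing the row-vector form turns it back into the column form, so the two representations carry exactly the same information. A minimal sympy sketch:

```python
import sympy as sp

At = sp.Matrix([[2, 1],
                [3, 3]])   # the matrix from the row-vector form
bt = sp.Matrix([[5, 4]])   # the right-hand side as a row vector

# Transposing (x, y) * At = bt gives At^T * (x, y)^T = bt^T,
# which is exactly the column form solved earlier.
print(At.T.solve(bt.T))  # Matrix([[1], [1]])
```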
Now, coming back to linear independence, say we have the vectors $u_1 = \begin{pmatrix} 2 \\ 0 \end{pmatrix}$ and $u_2 = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$. As you mentioned, the vectors are linearly independent if the equation $xu_1 + yu_2 = 0$ has only the trivial solution $x = y = 0$. Writing it out: $$\begin{pmatrix} 2x \\ 0 \end{pmatrix} + \begin{pmatrix} y \\ 2y \end{pmatrix} = \begin{pmatrix} 2x + y \\ 2y \end{pmatrix} = \begin{pmatrix} 2x + y \\ 0x + 2y \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix},$$ and we have to check whether this forces $$\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$ The problem of deciding whether a set of vectors is linearly independent has thus been reduced to the problem of solving a system of linear equations. Note that $$\begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix} = \begin{pmatrix} u_1 & u_2 \end{pmatrix},$$ i.e. the coefficient matrix is just the matrix whose columns are the given vectors; this is what makes the representation convenient.
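In code, this reduction is one line: the vectors are independent exactly when the homogeneous system has only the trivial solution, equivalently when the matrix has full column rank. A minimal sympy sketch:

```python
import sympy as sp

# The matrix whose columns are u1 and u2.
U = sp.Matrix([[2, 1],
               [0, 2]])

# U * (x, y)^T = 0 has only the trivial solution iff the null space
# of U is trivial, iff rank(U) equals the number of columns.
print(U.nullspace())        # []   -> only the trivial solution
print(U.rank() == U.cols)   # True -> u1, u2 are linearly independent
```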
Best Answer
If $c^1,\ldots, c^n$ are the columns of a matrix and $d^1,\ldots, d^n$ are the columns of the transformed matrix after some row operations, then $d^i$ is the coordinate vector of $c^i$ with respect to a different basis. The transition matrix to this new basis is formed by the columns of $R^{-1}$, where $D = RC$ is the relation between $D = (d^1,\ldots, d^n)$ and $C = (c^1,\ldots, c^n)$.
(It follows from this that if, for example, $c^1$ and $c^2$ are independent, then $d^1$ and $d^2$ will also be independent, and so on: row operations preserve the linear dependence relations among the columns.)
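A small sympy sketch of this fact, with a made-up example in which $c^3 = c^1 + c^2$; the same relation holds among the transformed columns because $D = RC$ with $R$ invertible:

```python
import sympy as sp

# Columns c1, c2, c3 with the dependence c3 = c1 + c2.
C = sp.Matrix([[1, 2, 3],
               [0, 1, 1],
               [1, 0, 1]])

# Any sequence of row operations is left-multiplication by an
# invertible R; this R encodes "r2 <- r2 + 2 r1, then swap r1, r3".
R = sp.Matrix([[0, 0, 1],
               [2, 1, 0],
               [1, 0, 0]])
D = R * C

# The dependence among the columns is unchanged: d3 = d1 + d2.
print(D.col(0) + D.col(1) == D.col(2))  # True
print(R.det() != 0)                     # True: R is invertible
```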