Your procedures for finding bases for columnspaces and rowspaces are rather non-standard. In particular, your second procedure for finding the rowspace is not only unorthodox but appears to be incorrect. Fully answering all of your questions takes a bit of time, and I apologize in advance for the length. The fact is, you can find both bases in a single shot by reducing the original matrix, say $A$, to its RREF, say $R$.
First of all, the RREF is fundamentally defined in terms of the rows. That's why it is the Reduced Row Echelon Form. Elementary row operations by design do not change the rowspace; each new row remains a linear combination of the old rows. That means you can row reduce your matrix all you want, but your rowspace remains the same. It follows that $R$ and $A$ share the same rowspace.
But the nonzero rows of $R$ are linearly independent: each one has a leading pivot entry in a column where every other row has a zero. The nonzero rows of $R$ also clearly span the rowspace of $R$, just by definition, and therefore they form a basis for the rowspace of $R$. Now since $\mathrm{row}(R) = \mathrm{row}(A)$, it follows that the nonzero rows of $R$ also form a basis for the rowspace of $A$. There is no need to go back to the original matrix because the rowspace does not change under elementary row operations.
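If you want to see this in action, here is a minimal sketch using Python's sympy library; the matrix $A$ below is a made-up example, not one from your question.

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 1, 1]])

R, pivots = A.rref()

# In RREF the nonzero rows sit on top, one per pivot, so the first
# len(pivots) rows of R give a basis for row(A) = row(R).
row_basis = [R.row(i) for i in range(len(pivots))]
print(row_basis)  # [Matrix([[1, 0, -1]]), Matrix([[0, 1, 2]])]
```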
Finding a basis for the columnspace is a bit more complicated because elementary row operations do not preserve the columnspace. The columns of $R$ do not necessarily span the same space as the columns of $A$. Of course, the columnspace of $A$ is just the rowspace of $A^\mathrm{T}$, so we can always apply the previous procedure to $A^\mathrm{T}$. I believe this is what your second procedure tries to do, but it doesn't get it quite right; more on that later. In any case, reducing two matrices is hard work, and luckily we can kill two birds with one stone.
If we encode the sequence of elementary row operations which take $A$ to $R$ as elementary matrices, then we can write
$$E_k\cdots E_1 A = R$$
where each $E_i$ is an elementary matrix. Let us just collectively write
$$EA = R$$
where $E$ is the product of all our elementary matrices. The important thing here is that $E$ is an invertible matrix since each $E_i$ is invertible. This fact actually characterizes row equivalent matrices: two (same sized) matrices $A$ and $B$ are row equivalent if and only if there exists an invertible matrix $E$ such that $A=EB$.
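We never need $E$ explicitly in what follows, but if you want one concretely, a standard trick (my addition, not part of the procedures being discussed) is to row reduce the augmented matrix $[A \mid I]$: the same operations that carry $A$ to $R$ carry $I$ to a valid $E$. A sympy sketch with a made-up matrix:

```python
from sympy import Matrix, eye

A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 1, 1]])

# Row reducing [A | I] applies to I the same row operations that take A
# to its RREF, so the right block ends up being one valid choice of E.
aug, _ = Matrix.hstack(A, eye(A.rows)).rref()
R, E = aug[:, :A.cols], aug[:, A.cols:]

assert E * A == R       # EA = R
assert E.det() != 0     # E is invertible
```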
Let us write $\mathbf{a}_i$ for the $i$th column of $A$ and $\mathbf{r}_i$ for the $i$th column of $R$. By block matrix multiplication (multiplying column by column), it follows that
$$E\mathbf{a}_i = \mathbf{r}_i$$
My claim now is that linear relations are preserved between the columns of $A$ and $R$, i.e. for any set of scalar coefficients $\{c_i\}$, we have
$$\mathbf{0} = \sum_{i=1}^n c_i\mathbf{a}_i \iff \mathbf{0}=\sum_{i=1}^n c_i \mathbf{r}_i$$
The proof is simple:
$$\mathbf{0} = \sum_{i=1}^n c_i\mathbf{r}_i \iff \mathbf{0}=\sum_{i=1}^n c_iE\mathbf{a}_i \iff \mathbf{0}= E\left(\sum_{i=1}^n c_i\mathbf{a}_i\right) \iff \mathbf{0} = \sum_{i=1}^n c_i\mathbf{a}_i$$
The last equivalence follows because $E$ is invertible and therefore its nullspace is just $\{\mathbf{0}\}$.
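You can also sanity check the claim numerically: the coefficient vectors $c$ with $Ac = \mathbf{0}$ are exactly the linear relations among the columns, so $A$ and $R$ should have identical nullspaces. A small sympy sketch (made-up matrix again):

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 1, 1]])
R, _ = A.rref()

# A c = 0 says exactly that sum_i c_i a_i = 0, so equal nullspaces mean
# the columns of A and R satisfy the same linear relations.
assert A.nullspace() == R.nullspace()
print(A.nullspace())  # [Matrix([[1], [-2], [1]])]
```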
The above relations are very powerful and we have some immediate corollaries:
Let $\mathcal{A}=\{\mathbf{a}_{i_k}\}$ be a subset of $\{\mathbf{a}_i\}$ and let $\mathcal{R} = \{\mathbf{r}_{i_k}\}$ be the corresponding subset of $\{\mathbf{r}_i\}$. Then
$\mathcal{A}$ is linearly independent if and only if $\mathcal{R}$ is linearly independent.
$\mathcal{A}$ spans $\mathrm{col}(A)$ if and only if $\mathcal{R}$ spans $\mathrm{col}(R)$.
The two points above imply that $\mathcal{A}$ forms a basis for $\mathrm{col}(A)$ if and only if $\mathcal{R}$ forms a basis for $\mathrm{col}(R)$.
I will not prove the above propositions; the answer is getting a bit long, and I think they would make a good exercise, so I encourage you to attempt a proof yourself. The implications of the corollaries are straightforward: if you are given a basis for the columnspace of $R$, taking the corresponding columns of $A$ gives a basis for $\mathrm{col}(A)$. The pivot columns of $R$ serve as an immediate basis for $\mathrm{col}(R)$, so the corresponding columns of $A$ serve as a basis for $\mathrm{col}(A)$. This is the underlying theory behind your first procedure.
To reiterate, take care to note that the pivot columns of $R$ do not form a basis for $\mathrm{col}(A)$ themselves; only the corresponding columns of $A$ do. Elementary row operations are designed to preserve the rowspace, not the columnspace. Oftentimes $\mathrm{col}(A) \neq \mathrm{col}(R)$, and it makes no sense to take columns of $R$ to act as a basis for the columnspace of $A$. This is the fundamental reason why one of the procedures needs to go back to the original matrix while the other does not.
This now enables you to efficiently find bases for the rowspace and the columnspace all in one go, and we've explained how your first procedure works. With the benefit of hindsight, you should be able to see why your second procedure is a bit weird. You're row reducing the transpose, which means that you are essentially performing elementary column operations on your original matrix. These operations do not preserve the rowspace, so the procedure as you've described it cannot be correct. At best, you are just performing the columnspace-basis procedure on $A^\mathrm{T}$, but that still requires you to go back to your original matrix. Even if your second procedure were correct, it would be largely redundant: as we've shown, simply reducing $A$ to its RREF $R$ is sufficient to find a basis for the columnspace and the rowspace (and the nullspace if you want).
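To make the one-shot procedure concrete, here is a minimal sympy sketch; the matrix $A$ is a made-up example rather than one from your question.

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 1, 1]])

R, pivots = A.rref()

row_basis  = [R.row(i) for i in range(len(pivots))]  # nonzero rows of R
col_basis  = [A.col(j) for j in pivots]              # pivot columns of A itself, not of R
null_basis = A.nullspace()                           # solutions of Ax = 0, for free

print(row_basis)   # basis for row(A)
print(col_basis)   # basis for col(A)
print(null_basis)  # basis for null(A)
```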
I will explain this by looking at a much simpler example in the 2-dimensional case. Say we have the following equations:
\begin{equation}
\begin{aligned}
2x + 3y & =5&\text{ (1) } \\
x + 3y &= 4 & \text{ (2) }
\end{aligned}
\end{equation}
This system can be represented as follows:
$$\begin{pmatrix} 2 & 3 \\ 1 & 3 \end{pmatrix} \begin{pmatrix}x \\ y \end{pmatrix} = \begin{pmatrix} 5 \\ 4 \end{pmatrix} $$
When doing row reduction, I am allowed to do the following operations:
(1) Interchanging two rows.
(2) Multiplying a row by a non-zero scalar.
(3) Adding a multiple of one row to another row.
All these operations on the matrix translate to the operations we are familiar with when solving a system of linear equations. For example, subtracting equation $(2)$ from equation $(1)$ results in the equation $x = 1$. On the matrix, this means subtracting row $2$ from row $1$ on both sides (or on the augmented matrix), which gives
$$\begin{pmatrix} 1 & 0 \\ 1 & 3 \end{pmatrix}\begin{pmatrix}x \\ y \end{pmatrix} = \begin{pmatrix} 1 \\ 4 \end{pmatrix}$$ To simplify further, we can subtract row $1$ from row $2$, which gives
$$\begin{pmatrix} 1 & 0 \\ 0 & 3 \end{pmatrix}\begin{pmatrix}x \\ y \end{pmatrix} = \begin{pmatrix} 1 \\ 3 \end{pmatrix}$$
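If you want to reproduce this elimination by machine, sympy's `rref` on the augmented matrix $[A \mid b]$ does it in one call; a quick sketch:

```python
from sympy import Matrix

# Augmented matrix [A | b] for the system 2x + 3y = 5, x + 3y = 4.
aug = Matrix([[2, 3, 5],
              [1, 3, 4]])

R, _ = aug.rref()
print(R)  # Matrix([[1, 0, 1], [0, 1, 1]]), i.e. x = 1 and y = 1
```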
Why are we doing this? Matrices became more than just a tool for solving linear equations; they became algebraic objects in their own right, with many properties of their own. Read A. Cayley, "A memoir on the theory of matrices". Sorry, I digress.
You can also do column operations, but then the system has to be written differently:
$$\begin{pmatrix} x & y \end{pmatrix}\begin{pmatrix} 2 & 1 \\ 3 & 3 \end{pmatrix} = \begin{pmatrix} 5 & 4 \end{pmatrix}$$ Why don't we represent it this way? You tell me. I didn't answer your question directly, but I think with the right motivation you will find your way.
Now coming back to linear independence, say we have the vectors $u_1 = \begin{pmatrix} 2 \\ 0 \end{pmatrix}$ and $u_2 = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$. As you mentioned, the vectors are linearly independent if the equation $xu_1 + yu_2 = 0$ has only the trivial solution $x = y = 0$, which means $$\begin{pmatrix} 2x \\ 0 \end{pmatrix} + \begin{pmatrix} y \\ 2y \end{pmatrix} = \begin{pmatrix} 2x + y \\ 2y \end{pmatrix} = \begin{pmatrix} 2x + y \\ 0x + 2y \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$ only when $$\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$ Now the problem of deciding whether a set of vectors is linearly independent has been reduced to the problem of solving a system of linear equations. It is to be noted that $$\begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix} = \begin{pmatrix} u_1 & u_2 \end{pmatrix}$$ It is just a representation which is convenient.
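In computational terms, this reduction is exactly what a rank or nullspace check performs; a short sympy sketch with the vectors above:

```python
from sympy import Matrix

# The columns of U are u1 = (2, 0) and u2 = (1, 2).
U = Matrix([[2, 1],
            [0, 2]])

# Ux = 0 has only the trivial solution exactly when the nullspace is {0},
# i.e. when the rank of U equals the number of columns.
print(U.nullspace())        # [] -- only the trivial solution
print(U.rank() == U.cols)   # True -- u1 and u2 are linearly independent
```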
Best Answer
In one sense, you can say that a vector is simply an object with certain properties, and it is neither a row of numbers nor a column of numbers. But in practice, we often want to use a list of $n$ numeric coordinates to describe an $n$-dimensional vector, and we call this list of coordinates a vector. The general convention seems to be that the coordinates are listed in the format known as a column vector, which is (or at least, which acts like) an $n \times 1$ matrix.
This has the nice property that if $v$ is a vector and $M$ is a matrix representing a linear transformation, the product $Mv$, computed by the usual rules of matrix multiplication, is another vector (specifically, a column vector) representing the image of $v$ under that transformation.
But because we write mostly in a horizontal direction, it is not always convenient to stack the coordinates of a vector vertically. If you're careful, you might write
$$ \langle x_1, x_2, \ldots, x_n \rangle^T $$
meaning the transpose of the row vector $\langle x_1, x_2, \ldots, x_n \rangle$; that is, we want the convenience of left-to-right notation but we make it clear that we actually mean a column vector (which is what you get when you transpose a row vector). If we're not being careful, however, we might just write $\langle x_1, x_2, \ldots, x_n \rangle$ as our "vector" and assume everyone will understand what we mean.
Occasionally we actually need the coordinates of a vector in row-vector format, in which case we can represent that by transposing a column vector. For example, if $u$ and $v$ are vectors (that is, column vectors), then the usual inner product of $u$ and $v$ can be written $u^T v$, evaluated as the product of a $1\times n$ matrix with an $n \times 1$ matrix. Note that if $u$ is a (column) vector, then $u^T$ really is a row vector and can (and should) legitimately be written as $\langle u_1, u_2, \ldots, u_n \rangle$.
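Here is a quick sympy illustration of these conventions, with made-up coordinates:

```python
from sympy import Matrix

u = Matrix([1, 2, 3])   # Matrix([...]) with a flat list gives a column vector
v = Matrix([4, 5, 6])

# u.T is 1 x 3, so u.T * v is a 1 x 1 matrix holding the inner product.
print(u.T * v)   # Matrix([[32]])
print(u.dot(v))  # 32 -- the same number
```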
This all works out quite neatly and conveniently when people are careful and precise in how they write things. At a deeper and more abstract level you can formalize these ideas as shown in another answer. (My answer here is relatively informal, intended merely to give a sense of why people think of the column vector as "the" representation of an abstract vector.)
When people are not careful and precise, it may help to remind yourself that the transpose of a certain vector representation is sometimes intended in a given context, even though the person writing that representation neglected to indicate it.