Recall that if a vector space has a finite basis it is said to be finite dimensional, and its dimension is defined to be the number of vectors in such a basis. Bases are (possibly infinite) sets of vectors that span the vector space and are linearly independent. One can prove that every vector in the space can be written in one and only one way as a linear combination of the basis vectors. Say $V$ is an $F$-vector space with basis $B=\{v_1,\ldots,v_n\}$. Then if we have $$v=\alpha_1v_1+\cdots+\alpha_nv_n$$
we write $(v)_B=(\alpha_1,\ldots,\alpha_n)$ and say $v$ has coordinates $(\alpha_1,\ldots,\alpha_n)$ in the basis $B$. This immediately gives a mapping $V\to F^n$ given by $$v\mapsto (v)_B$$
This is the same as mapping each basis vector $v_i$ to $$(0,0,\ldots,\underbrace{1}_i,\ldots,0)$$
which entirely determines the transformation.
Note that $0\mapsto (0,0,\ldots,0)$, that $(v+w)_B=(v)_B+(w)_B$, and that $(\lambda v)_B=\lambda (v)_B$, so this map is a linear transformation; since it is also a bijection, it gives an isomorphism between $V$ and $F^n$. This means $V$ and $F^n$ are essentially the same as vector spaces, that is, "there is only one vector space of dimension $n$ over a field $F$ up to isomorphism."
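As a small illustration of computing coordinates in a basis, here is a minimal sketch in NumPy. The basis $v_1,v_2,v_3$ of $\Bbb R^3$ chosen below is hypothetical (not from the text); the point is only that $(v)_B$ is the unique solution of the linear system whose columns are the basis vectors.

```python
import numpy as np

# Hypothetical basis of R^3, chosen only to illustrate computing (v)_B
# by solving the linear system  [v1 v2 v3] * coords = v.
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 1.0, 0.0])
B = np.column_stack([v1, v2, v3])   # basis vectors as the columns of B

v = np.array([2.0, 3.0, 3.0])
coords = np.linalg.solve(B, v)      # the unique coefficients alpha_1, alpha_2, alpha_3
print(coords)                       # [1. 2. 1.], i.e. v = 1*v1 + 2*v2 + 1*v3

# Sanity check: reconstructing v from its coordinates recovers v exactly.
assert np.allclose(B @ coords, v)
```

The uniqueness of the solution is exactly the uniqueness of the coordinate representation: because the basis vectors are linearly independent, the matrix $B$ is invertible.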
Given a subspace $V$ of $\Bbb R^n$ spanned by $m$ vectors $v_1,\dots,v_m$, it is your choice whether to put the vectors in as the rows of an $m\times n$ matrix $A$ or as the columns of an $n\times m$ matrix $B$. You will have $V = \text{Row space}(A)$ and $V=\text{Column space}(B)$. Which algorithm you use to find a basis is up to you. (Of course, $B = A^\top$.)
Here is a practical difference. When you use the row space approach, the basis you obtain will consist of vectors perhaps not obviously related to your original $v_1,\dots,v_m$, whereas when you use the column space approach, you will obtain a subset of the original vectors as your basis. So, specifically, if you want a basis formed by discarding some of your original vectors and keeping the remaining ones, you will want the column space approach.
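Here is a minimal sketch of both approaches, using SymPy for exact row reduction (the library choice and the particular spanning vectors are mine, not part of the original answer). The three vectors below are hypothetical, with $v_3 = v_1 + v_2$ so that any basis has two elements.

```python
import sympy as sp

# Hypothetical spanning vectors of a subspace of R^4; v3 = v1 + v2.
v1 = sp.Matrix([1, 2, 0, 1])
v2 = sp.Matrix([0, 1, 1, 0])
v3 = sp.Matrix([1, 3, 1, 1])

# Row space approach: the rows of A are the v_i; the nonzero rows of
# rref(A) form a basis, but they need not be any of the original vectors.
A = sp.Matrix.hstack(v1, v2, v3).T
rref_A, _ = A.rref()
row_basis = [rref_A.row(i) for i in range(A.rank())]

# Column space approach: the columns of B are the v_i; the pivot columns
# of B form a basis drawn from the original vectors themselves.
B = A.T
_, pivots = B.rref()
col_basis = [B.col(j) for j in pivots]   # here: v1 and v2

print(row_basis)
print(col_basis)
```

Note how the column space approach returns $v_1$ and $v_2$ verbatim, while the row space approach returns reduced rows that merely span the same subspace.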
Two vectors are orthogonal if their inner product is zero. In other words $\langle u,v\rangle =0$. They are orthonormal if they are orthogonal, and additionally each vector has norm $1$. In other words $\langle u,v \rangle =0$ and $\langle u,u\rangle = \langle v,v\rangle =1$.
Example
For vectors in $\mathbb{R}^3$ let
$$ u \;\; =\;\; \left[ \begin{array}{c} 1\\ 2\\ 0\\ \end{array} \right ] \hspace{2pc} v \;\; =\;\; \left [ \begin{array}{c} 0\\ 0\\ 3\\ \end{array} \right ]. $$
The vectors $u$ and $v$ are orthogonal since
$$ \langle u, v\rangle \;\; =\;\; 1\cdot 0 + 2\cdot 0 + 0\cdot 3 \;\; =\;\; 0 $$
but they are not orthonormal since $||u|| = \sqrt{\langle u,u\rangle } = \sqrt{1 + 4} = \sqrt{5}$ and $||v|| = \sqrt{\langle v,v\rangle } = \sqrt{3^2} = 3$. If we define new vectors $\hat{u} = \frac{u}{||u||}$ and $\hat{v} = \frac{v}{||v||}$ then $\hat{u}$ and $\hat{v}$ are orthonormal since they each now have norm $1$, and orthogonality is preserved since $\langle \hat{u}, \hat{v}\rangle = \frac{\langle u,v\rangle }{||u||\cdot ||v||} = 0$.
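A quick numerical check of this example, using NumPy (the library choice is mine, not the original answer's):

```python
import numpy as np

# The vectors u and v from the example above.
u = np.array([1.0, 2.0, 0.0])
v = np.array([0.0, 0.0, 3.0])

print(np.dot(u, v))           # 0.0: u and v are orthogonal
print(np.linalg.norm(u))      # sqrt(5) ~ 2.236: not unit length, so not orthonormal
print(np.linalg.norm(v))      # 3.0

# Normalizing each vector preserves orthogonality and yields an orthonormal pair.
u_hat = u / np.linalg.norm(u)
v_hat = v / np.linalg.norm(v)
assert np.isclose(np.dot(u_hat, v_hat), 0.0)
assert np.isclose(np.linalg.norm(u_hat), 1.0)
assert np.isclose(np.linalg.norm(v_hat), 1.0)
```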