The column space of $A$ is $\operatorname{span}\left(\begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix}, \begin{pmatrix} 2 \\ 4 \\ 2 \end{pmatrix}\right)$.
Those two vectors are a basis for $\operatorname{col}(A)$, but they are not normalized.
NOTE: In this case, the columns of $A$ are already orthogonal, so you don't strictly need the Gram-Schmidt process here; but since in general the columns won't be orthogonal, I'll explain it anyway.
To make them orthogonal, we use the Gram-Schmidt process:
$w_1 = \begin{pmatrix} 1 \\ -1 \\ 1 \end{pmatrix}$ and $w_2 = \begin{pmatrix} 2 \\ 4 \\ 2 \end{pmatrix} - \operatorname{proj}_{w_1} \begin{pmatrix} 2 \\ 4 \\ 2 \end{pmatrix}$, where $\operatorname{proj}_{w_1} \begin{pmatrix} 2 \\ 4 \\ 2 \end{pmatrix}$ is the orthogonal projection of $\begin{pmatrix} 2 \\ 4 \\ 2 \end{pmatrix}$ onto the subspace $\operatorname{span}(w_1)$.
In general, $\operatorname{proj}_vu = \dfrac {u \cdot v}{v\cdot v}v$.
Then to normalize a vector, you divide it by its norm:
$u_1 = \dfrac {w_1}{\|w_1\|}$ and $u_2 = \dfrac{w_2}{\|w_2\|}$.
The norm of a vector $v$, denoted $\|v\|$, is given by $\|v\|=\sqrt{v\cdot v}$.
This is how $u_1$ and $u_2$ were obtained from the columns of $A$.
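The steps above can be sketched in NumPy. As noted, the Gram-Schmidt correction term happens to vanish here because the two columns are already orthogonal:

```python
import numpy as np

# Columns of A from the answer above
v1 = np.array([1.0, -1.0, 1.0])
v2 = np.array([2.0, 4.0, 2.0])

# Gram-Schmidt: subtract from v2 its projection onto w1
w1 = v1
w2 = v2 - (v2 @ w1) / (w1 @ w1) * w1  # the projection is 0 here, since v1 is orthogonal to v2

# Normalize each vector by dividing by its norm
u1 = w1 / np.linalg.norm(w1)
u2 = w2 / np.linalg.norm(w2)

print(u1 @ u2)  # ~0: u1 and u2 are orthonormal
```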
Then the orthogonal projection of $b$ onto the subspace $\operatorname{col}(A)$ is given by $\operatorname{proj}_{\operatorname{col}(A)}b = \operatorname{proj}_{u_1}b + \operatorname{proj}_{u_2}b$.
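As a quick numerical check of this formula (the vector $b$ from the question is not shown in this excerpt, so I use a sample one), the sum of the two projections agrees with the least-squares projection onto the original columns:

```python
import numpy as np

# Orthonormal basis of col(A) from the Gram-Schmidt step
u1 = np.array([1.0, -1.0, 1.0]) / np.sqrt(3)
u2 = np.array([2.0, 4.0, 2.0]) / np.sqrt(24)

b = np.array([1.0, 2.0, 3.0])  # sample vector; the original b is not given here

# proj_{col(A)} b = proj_{u1} b + proj_{u2} b, with unit vectors this is (b.u)u
proj = (b @ u1) * u1 + (b @ u2) * u2

# Cross-check against the least-squares projection onto the original columns
A = np.column_stack([[1, -1, 1], [2, 4, 2]]).astype(float)
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(proj, A @ x))  # True
```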
If you have, say, a $4 \times 4$ square matrix represented as $A=[c_1,c_2,c_3,c_4]$, where the $c_i$ are columns, then you know that for a column vector $v$, $Av=\sum_i c_iv_i$, where the $v_i$ are the entries of $v$.
So by finding bases for $M^T$ and $M$ you know a basis for the whole space, since $\mathbb{R}^4=M^T \oplus M$, and to find the projection of some vector $v$ onto these spaces you need to represent $v$ in terms of these basis vectors. That is, you are trying to find coefficients $a_i$ such that $[v_1,v_2,v_3,v_4]\,a = v$, where $a$ is the column vector with entries $a_i$ and the $v_i$ are the basis vectors in column form.
Since the $v_i$ are linearly independent, the matrix $A=[v_1,v_2,v_3,v_4]$ is invertible, and its inverse can easily be found using Gaussian elimination. So you can easily find $a$, and you are done.
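A minimal sketch of this recipe, using a hypothetical set of four linearly independent basis vectors in $\mathbb{R}^4$ (the first two spanning one subspace, the last two its complement):

```python
import numpy as np

# Hypothetical basis vectors, stacked as columns of B
B = np.column_stack([[1, 0, 1, 0],
                     [0, 1, 0, 1],
                     [1, 0, -1, 0],
                     [0, 1, 0, -1]]).astype(float)

v = np.array([2.0, 3.0, 4.0, 5.0])

# Solve B a = v for the coordinates a (Gaussian elimination under the hood)
a = np.linalg.solve(B, v)

# Projection of v onto the subspace spanned by the first two basis vectors:
# keep only the components along those two vectors
proj = B[:, :2] @ a[:2]

print(np.allclose(B @ a, v))  # True: v is reconstructed from its coordinates
```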
A general idea: the plane is a subspace of $\mathbb{R}^3$, so it can be represented by a $3 \times 2$ matrix $[A]$ with rank $2$, i.e. $2$ linearly independent columns.
Now, consider the plane spanned by the columns of $[A] = [a_1\ a_2]$, where the $a_i$ are the columns of $[A]$. By trial and error we see that one vector lying in the plane $S$ is $a_1 = [2\ \ 1\ \ 0]^T$.
To get another vector that is orthogonal to $a_1$ and also lies in the plane $S$, consider $a_2 = [j\ \ k\ \ l]^T$. The orthogonality condition gives $2j+k=0$, and requiring that it lie in the plane gives $j-2k+l=0$. One vector satisfying both conditions is $a_2 = [1\ \ {-2}\ \ {-5}]^T$. So our matrix is $[A]_{3\times 2} =$
$$ \begin{bmatrix} 2 & 1 \\ 1 & -2 \\ 0 & -5 \end{bmatrix} $$
To project onto this plane, we compute its orthogonal projector, which is a $3 \times 3$ matrix since we are projecting onto a plane in $\mathbb{R}^3$: $$[P] = A(A^TA)^{-1}A^T.$$ Computing this, we get $$[P] = \begin{bmatrix} 5/6 & 1/3 & -1/6 \\ 1/3 & 1/3 & 1/3 \\ -1/6 & 1/3 & 5/6 \end{bmatrix}$$
This is the orthogonal projector matrix onto the plane $S$: $x-2y+z=0$. So the orthogonal projection $v'$ of any vector $v$ is given by $v' = [P][v]$.
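The projector computation above can be reproduced in a few lines of NumPy, which also confirms that $P$ annihilates the plane's normal vector $(1, -2, 1)$:

```python
import numpy as np

# A from the answer above: columns span the plane x - 2y + z = 0
A = np.array([[2.0, 1.0],
              [1.0, -2.0],
              [0.0, -5.0]])

# Orthogonal projector P = A (A^T A)^{-1} A^T
P = A @ np.linalg.inv(A.T @ A) @ A.T

expected = np.array([[ 5/6, 1/3, -1/6],
                     [ 1/3, 1/3,  1/3],
                     [-1/6, 1/3,  5/6]])
print(np.allclose(P, expected))  # True

# P sends the plane's normal (1, -2, 1) to zero
print(np.allclose(P @ np.array([1.0, -2.0, 1.0]), 0))  # True
```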