The key point to understand is that you are really dealing with two copies of $\mathbb R^2$, although this is not obvious when using the standard basis.
The first $\mathbb R^2$ is your vector space. Let's write this vector space and everything in it in blue. This first $\color{blue}{\mathbb R^2}$ is equipped with a vector space structure and additionally with the dot product $\color{blue}{\mathbf x\cdot\mathbf y = x_1y_1+x_2y_2}$.
Now as soon as you choose a basis $\{\color{blue}{\mathbf b_1},\color{blue}{\mathbf b_2}\}\subset\color{blue}{\mathbb R^2}$, you can write every vector $\color{blue}{\mathbf x}\in\color{blue}{\mathbb R^2}$ in a unique way as $\color{blue}{\mathbf x}=\color{red}{\xi_1}\color{blue}{\mathbf b_1}+\color{red}{\xi_2}\color{blue}{\mathbf b_2}$. Note that $\color{red}{\xi_1}$ and $\color{red}{\xi_2}$ are not intrinsic components of the vector in $\color{blue}{\mathbb R^2}$; they are basis-dependent.
But of course you always need two of them, and when doing vector addition and scalar multiplication, you'll find they behave exactly like the components of a vector should. Therefore it makes sense to consider them as part of an $\color{red}{\mathbb R^2}$ which, however, is a different $\mathbb R^2$ from the original $\color{blue}{\mathbb R^2}$ we started with. In particular, the coordinate $\color{red}{\mathbb R^2}$ is not pre-equipped with an inner product.
The basis then defines a linear map $\beta$ from the coordinate $\color{red}{\mathbb R^2}$ to the original $\color{blue}{\mathbb R^2}$ given by
$$\beta(\color{red}{\boldsymbol\xi})=\color{red}{\xi_1}\color{blue}{\mathbf b_1} + \color{red}{\xi_2}\color{blue}{\mathbf b_2}.$$
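In matrix terms, $\beta$ is just multiplication by the matrix whose columns are the basis vectors. A minimal sketch in Python/NumPy (the particular basis and coordinates here are made-up examples):

```python
import numpy as np

# An assumed example basis of the (blue) vector-space R^2;
# any two linearly independent vectors would do.
b1 = np.array([1.0, 0.0])
b2 = np.array([1.0, 1.0])
B = np.column_stack([b1, b2])  # beta as a matrix: beta(xi) = B @ xi

xi = np.array([3.0, -2.0])     # coordinates in the (red) coordinate R^2
x = B @ xi                     # the vector xi_1*b1 + xi_2*b2 in the blue R^2
assert np.allclose(x, xi[0] * b1 + xi[1] * b2)
```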
Now remember that I said the coordinate $\color{red}{\mathbb R^2}$ is not pre-equipped with an inner product. That doesn't mean we cannot give it one. But we want to do it in such a way that the product is preserved by the map $\beta$, that is, we want to have
$$\color{red}{\langle \boldsymbol\xi,\boldsymbol\eta\rangle} = \beta(\color{red}{\boldsymbol\xi})\color{blue}{\cdot}\beta(\color{red}{\boldsymbol\eta})$$
where $\color{red}{\langle \boldsymbol\xi,\boldsymbol\eta\rangle}$ denotes the inner product in the coordinate $\color{red}{\mathbb R^2}$. Inserting the explicit formula for $\beta$, one easily sees that
$$\color{red}{\langle \boldsymbol\xi,\boldsymbol\eta\rangle} = \sum_{j,k=1}^2 (\color{blue}{\mathbf b_j\cdot\mathbf b_k})\color{red}{\xi_j\eta_k}.$$
Now quite obviously, if $\{\color{blue}{\mathbf b_1},\color{blue}{\mathbf b_2}\}$ is not an orthogonal basis, then $\color{red}{\langle \boldsymbol\xi,\boldsymbol\eta\rangle}\ne\color{red}{\xi_1\eta_1}+\color{red}{\xi_2\eta_2}$, and indeed, the inner product on the coordinate $\color{red}{\mathbb R^2}$ explicitly depends on the chosen basis $\{\color{blue}{\mathbf b_1},\color{blue}{\mathbf b_2}\}$. But that is not really surprising, because the vector in $\color{blue}{\mathbb R^2}$ those coordinates describe does depend on the basis chosen, and of course different vectors in general have different inner products.
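This formula says the induced inner product is given by the Gram matrix $G$ with entries $G_{jk}=\color{blue}{\mathbf b_j\cdot\mathbf b_k}$, i.e. $\color{red}{\langle\boldsymbol\xi,\boldsymbol\eta\rangle}=\color{red}{\boldsymbol\xi}^T G\,\color{red}{\boldsymbol\eta}$. A quick numerical check (the non-orthogonal basis and the coordinate vectors are arbitrary choices):

```python
import numpy as np

# An assumed non-orthogonal example basis.
b1 = np.array([1.0, 0.0])
b2 = np.array([1.0, 1.0])
B = np.column_stack([b1, b2])

G = B.T @ B  # Gram matrix: G[j, k] = b_j . b_k

xi  = np.array([3.0, -2.0])
eta = np.array([1.0,  4.0])

lhs = xi @ G @ eta          # <xi, eta> via the Gram matrix
rhs = (B @ xi) @ (B @ eta)  # beta(xi) . beta(eta) in the blue R^2
assert np.isclose(lhs, rhs)

# The naive dot product of the coordinates differs for a
# non-orthogonal basis:
assert not np.isclose(lhs, xi @ eta)
```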
Note that by definition of the inner product, with $\beta(\color{red}{\boldsymbol\xi})=\color{blue}{\mathbf x}$ and $\beta(\color{red}{\boldsymbol\eta})=\color{blue}{\mathbf y}$ it is of course still true that
$$\color{red}{\langle \boldsymbol\xi,\boldsymbol\eta\rangle} = \color{blue}{\textbf x\cdot\textbf y} = \color{blue}{x_1y_1} + \color{blue}{x_2y_2}.$$
But in general, $\color{blue}{x_1y_1} + \color{blue}{x_2y_2} \ne \color{red}{\xi_1\eta_1}+\color{red}{\xi_2\eta_2}$.
However, if you choose the standard basis $\color{blue}{\mathbf b_k}=\color{blue}{\mathbf e_k}$, then you obviously have $\color{red}{\xi_k}=\color{blue}{x_k}$ and $\color{red}{\langle \boldsymbol\xi,\boldsymbol\eta\rangle} = \color{blue}{x_1y_1+x_2y_2} = \color{red}{\xi_1\eta_1}+\color{red}{\xi_2\eta_2}$. This is why it is so easy to overlook that you are really working with two different $\mathbb R^2$ when using the standard basis.
A general idea -
There is a plane $S$ in $R^3$, say the plane $x-2y+z=0$. It can be represented by a matrix $[A]$: a $3\times 2$ matrix of rank $2$, i.e. with $2$ linearly independent columns.
Now consider the plane as spanned by the columns of $[A]=[a_1\ a_2]$, where the $a_i$ are the columns of $[A]$. By trial and error we find one vector lying in the plane $S$: $a_1=[2\ \ 1\ \ 0]^T$ (indeed $2-2\cdot 1+0=0$).
To get another vector that lies in the plane $S$ and is orthogonal to $a_1$, write $a_2=[j\ \ k\ \ l]^T$. The orthogonality condition gives $2j+k=0$, and lying in the plane gives $j-2k+l=0$. A vector satisfying both conditions is $a_2=[1\ \ {-2}\ \ {-5}]^T$. So our matrix is $[A]_{3\times 2}=$
$$
\begin{bmatrix}
2 & 1 \\
1 & -2 \\
0 & -5 \\
\end{bmatrix}
$$
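The membership and orthogonality conditions above are easy to verify numerically; a sketch using NumPy:

```python
import numpy as np

n  = np.array([1.0, -2.0, 1.0])   # normal of the plane x - 2y + z = 0
a1 = np.array([2.0, 1.0, 0.0])
a2 = np.array([1.0, -2.0, -5.0])

assert np.isclose(n @ a1, 0.0)    # a1 lies in the plane
assert np.isclose(n @ a2, 0.0)    # a2 lies in the plane
assert np.isclose(a1 @ a2, 0.0)   # a1 and a2 are orthogonal

A = np.column_stack([a1, a2])     # the 3x2 matrix [A]
```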
To project orthogonally onto this plane, we compute its orthogonal projector, a $3\times 3$ matrix since we are projecting onto a plane in $R^3$: $$[P] = A(A^TA)^{-1}A^T.$$
Computing this we get $[P]=$
$$
\begin{bmatrix}
5/6 & 1/3 & -1/6\\
1/3 & 1/3 & 1/3 \\
-1/6 & 1/3 & 5/6\\
\end{bmatrix}
$$
This is the orthogonal projector matrix onto the plane $S:\ x-2y+z=0$. The orthogonal projection $v'$ of any vector $v$ is then $v' = [P]v$.
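The computation can be checked numerically; a sketch in Python/NumPy (the test vector $v$ is an arbitrary choice):

```python
import numpy as np

# The matrix [A] with the two orthogonal in-plane vectors as columns.
A = np.array([[2.0,  1.0],
              [1.0, -2.0],
              [0.0, -5.0]])

# Orthogonal projector onto the column space of A.
P = A @ np.linalg.inv(A.T @ A) @ A.T

# Sanity checks: P is symmetric and idempotent.
assert np.allclose(P, P.T)
assert np.allclose(P @ P, P)

n = np.array([1.0, -2.0, 1.0])     # normal of x - 2y + z = 0
v = np.array([3.0, 1.0, 2.0])      # arbitrary test vector (assumption)
v_proj = P @ v
assert np.isclose(n @ v_proj, 0.0) # the projection lies in the plane
```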
Best Answer
Find a basis for the subspace. Extend it to a basis of the whole vector space. Orthogonalize the entire basis using Gram-Schmidt, with the basis of the subspace first; this will give you an orthogonal basis of the subspace, and the remaining vectors will form an orthogonal basis for the orthogonal complement.
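This recipe can be sketched in Python/NumPy. The concrete subspace (the plane $x-2y+z=0$ from the previous answer) is just an example, and `gram_schmidt` is a hypothetical helper written here, not a library function:

```python
import numpy as np

def gram_schmidt(vectors, tol=1e-12):
    """Orthogonalize vectors in order, dropping (near-)dependent ones."""
    ortho = []
    for v in vectors:
        # Subtract the projections onto all previously kept vectors.
        w = v - sum((v @ u) / (u @ u) * u for u in ortho)
        if np.linalg.norm(w) > tol:
            ortho.append(w)
    return ortho

# Subspace basis first, then the standard basis of R^3 to extend
# it to a spanning list; Gram-Schmidt discards the redundant ones.
subspace  = [np.array([2.0, 1.0, 0.0]), np.array([1.0, -2.0, -5.0])]
extension = [np.eye(3)[i] for i in range(3)]

full = gram_schmidt(subspace + extension)

basis_S     = full[:2]  # orthogonal basis of the subspace S
basis_Sperp = full[2:]  # orthogonal basis of the orthogonal complement
```

For this example the complement is one-dimensional and comes out parallel to the plane's normal $(1,-2,1)$, as expected.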