If you have, say, a $4 \times 4$ square matrix represented by $A=[c_1,c_2,c_3,c_4]$, where the $c_i$ are columns, then you know that, given a column vector $v$, $Av=\sum_i c_iv_i$, where the $v_i$ are the elements of $v$.
So by finding bases for $M^\perp$ and $M$ you get a basis for the whole space, since $\mathbb{R}^4=M^\perp \oplus M$, and to find the projection of some vector $v$ onto these subspaces you need to represent $v$ in terms of those basis vectors. That is, you are trying to find coefficients $a_i$ such that $[b_1,b_2,b_3,b_4]\,a = v$, where $a$ is the column vector with entries $a_i$ and the $b_i$ are the basis vectors in column form.
Since the $b_i$ are linearly independent, the matrix $B=[b_1,b_2,b_3,b_4]$ is invertible, and its inverse can easily be found using Gaussian elimination. So you can easily find $a$, and you are done.
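As a concrete sketch of this recipe (the subspace, basis vectors, and the vector $v$ below are made-up illustrations, not taken from the question):

```python
import numpy as np

# Hypothetical example in R^4: M is spanned by b1, b2, and its orthogonal
# complement is spanned by b3, b4 (chosen orthogonal to b1 and b2).
b1 = np.array([1.0, 0.0,  1.0,  0.0])
b2 = np.array([0.0, 1.0,  0.0,  1.0])
b3 = np.array([1.0, 0.0, -1.0,  0.0])
b4 = np.array([0.0, 1.0,  0.0, -1.0])

B = np.column_stack([b1, b2, b3, b4])  # invertible: the b_i are independent
v = np.array([2.0, 3.0, 4.0, 5.0])

a = np.linalg.solve(B, v)              # coordinates of v in the basis {b_i}

# The projection onto M keeps only the M-components of v,
# and the projection onto the complement keeps the rest.
proj_M     = a[0] * b1 + a[1] * b2
proj_Mperp = a[2] * b3 + a[3] * b4

assert np.allclose(proj_M + proj_Mperp, v)
```

Using `np.linalg.solve` rather than forming `B`'s inverse explicitly is the standard numerical shortcut; it amounts to the same Gaussian elimination.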
Let's restrict our attention to subspaces $V$ of $\mathbb{R}^3$ rather than $\mathbb{R}^n$. Once this case is understood, you can try to generalize it. It is important to think slowly from the definitions. Geometric intuition will come afterwards (and be correct). I will not recall the definition of orthogonal projection onto a subspace for you, you can look that up in your notes/textbook.
You appear to be confusing several concepts. Let me try to clarify them for you.
Fix a subspace $V \subseteq \mathbb{R}^3$ (this could be the origin, a line through the origin, a plane containing the origin, or the entire space $\mathbb{R}^3$). Let $T_V\colon \mathbb{R}^3 \rightarrow \mathbb{R}^3$ be the linear transformation defined by orthogonal projection onto the subspace $V$.
Any linear transformation has a kernel and an image. They are defined for $T_V$ as follows:
$$\text{image}(T_V) = \left\{ y \in \mathbb{R}^3 \colon \exists x \in \mathbb{R}^3 \text{ such that } T_V(x) = y \right\} $$
$$\text{kernel}(T_V) = \left\{ x \in \mathbb{R}^3 \colon T_V(x) = 0 \right\}$$
(you may note that both the image and the kernel of $T_V$ are subspaces of $\mathbb{R}^3$).
From the first definition, we can show that $$\text{image}(T_V) = V.$$ The proof uses two key facts: the definition of the image of a linear transformation, and the definition of the map $T_V$.
Proof that $\text{image}(T_V) = V$: In order to do this, we show that $\text{image}(T_V) \subseteq V$ and $V \subseteq \text{image}(T_V)$:
For any vector $x \in \mathbb{R}^3$, the orthogonal projection of $x$ onto $V$ is an element of $V$. Thus $\text{image}(T_V) \subseteq V$.
On the other hand, if $x$ is an element of $V$, then $T_V(x) = x$ (the orthogonal projection of a vector in $V$ onto $V$ is itself), so $V\subseteq \text{image}(T_V)$. This completes the proof. $\square$
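A small numerical sanity check of the two inclusions (the plane $V$ and the vectors used here are illustrative choices, not from the text):

```python
import numpy as np

# Hypothetical plane V in R^3, spanned by the orthonormal columns of Q.
Q = np.array([[1/np.sqrt(2), 0.0],
              [0.0,          1.0],
              [1/np.sqrt(2), 0.0]])
P = Q @ Q.T  # matrix of T_V, the orthogonal projection onto V

# image(T_V) ⊆ V: the projection of any x lies in V, so it is fixed by P.
x = np.array([3.0, -1.0, 2.0])
assert np.allclose(P @ (P @ x), P @ x)

# V ⊆ image(T_V): a vector already in V is its own projection.
v = Q @ np.array([2.0, 5.0])   # an arbitrary element of V
assert np.allclose(P @ v, v)
```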
Thus:
- If $V$ is a line in $\mathbb{R}^3$, then $\text{image}(T_V)$ is the same line.
- If $V$ is a plane in $\mathbb{R}^3$, then $\text{image}(T_V)$ is the same plane.
- If $V$ is the entire space $\mathbb{R}^3$, then $\text{image}(T_V)$ is the entire space $\mathbb{R}^3$.
Now we would like to describe the second space, $\text{kernel}(T_V)$. In order to do this, it is useful to recall that the orthogonal complement of a subspace $V$ is a new subspace defined in the following way:
$$V^{\perp} = \left\{ y\in \mathbb{R}^3 : \forall x\in V, \langle x,y\rangle = 0 \right\}.$$
In plain English, $V^{\perp}$ is the set of all vectors that are orthogonal to every vector in $V$.
You should think about why the following statements are true (note that they only make sense if $V$ is a subspace of $\mathbb{R}^3$):
- If $V$ is a line, then $V^{\perp}$ is a plane.
- If $V$ is a plane, then $V^{\perp}$ is a line.
- If $V$ is the origin, then $V^{\perp}$ is the entire space $\mathbb{R}^3$.
- If $V$ is the entire space $\mathbb{R}^3$, then $V^{\perp}$ is the origin.
You should also try to draw pictures of some examples.
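Numerically, $V^{\perp}$ can be found as the null space of a matrix whose rows span $V$, since $y \perp$ every row exactly when $Sy = 0$. A sketch with a made-up plane (not one from the text):

```python
import numpy as np

# Hypothetical example: V is the plane in R^3 spanned by the rows of S.
S = np.array([[1.0, 0.0, -1.0],
              [0.0, 1.0, -1.0]])

# The right-singular vectors with zero singular value span the null space
# of S, which is exactly V-perp.
_, sigma, Vt = np.linalg.svd(S)
rank = int(np.sum(sigma > 1e-12))
perp_basis = Vt[rank:]           # rows spanning V-perp

# V is a plane (dim 2), so V-perp is a line (dim 1), as the list above says.
assert perp_basis.shape[0] == 1
assert np.allclose(S @ perp_basis.T, 0)
```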
The following statement contains the intuition you are after. I will leave the proof of this to you.
$$V^{\perp} = \text{kernel}(T_V).$$
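Once you have proved this, you can also check it numerically. A minimal sketch, assuming $V$ is the (made-up) plane $x+y+z=0$ with unit normal $n$:

```python
import numpy as np

# V = plane x + y + z = 0 in R^3; its unit normal n spans V-perp.
n = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
P = np.eye(3) - np.outer(n, n)   # orthogonal projection onto V

# n (hence all of V-perp) is sent to zero by T_V, so V-perp ⊆ kernel(T_V) ...
assert np.allclose(P @ n, 0)

# ... and anything T_V kills is a multiple of n: P x = 0 forces x = (n·x) n.
x = 5.0 * n
assert np.allclose(P @ x, 0) and np.allclose(x, (n @ x) * n)
```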
Best Answer
Note that
$$A\cdot\underbrace{ \begin{bmatrix} 1& 0& 1\\ 0& 1& 1\\ -1&-1&-1 \end{bmatrix}}_X= \underbrace{\begin{bmatrix} 1& 0& 0\\ 0& 1& 0\\ -1&-1& 0 \end{bmatrix}}_Y,$$
since (left-multiplying by) $A$ sends the first two columns of $X$ to themselves (because they are in the plane) and the third column to zero (because it's normal to the plane). It thus has rank $2$ and nullity $1$. The range is spanned by the first two columns, and is the plane (of dimension two), while the null space is the normal line (of dimension one) to the plane through the origin.

The matrix of the transformation can be obtained by multiplying both sides on the right by the inverse of $X$:
$$A=\begin{bmatrix} 1& 0& 0\\ 0& 1& 0\\ -1&-1& 0 \end{bmatrix}\cdot \begin{bmatrix} 1& 0& 1\\ 0& 1& 1\\ -1&-1&-1 \end{bmatrix}^{-1} =YX^{-1},$$
or by row reducing $[X^t\mid Y^t]$ to $[I\mid A^t]$:
$$ \left[ \begin{array}{rrr|rrr} 1 & 0 &-1 & 1 & 0 &-1\\ 0 & 1 &-1 & 0 & 1 &-1\\ 1 & 1 &-1 & 0 & 0 & 0\\ \end{array} \right] \sim\left[ \begin{array}{rrr|rrr} 1 & 0 & 0 & 0 &-1 & 1\\ 0 & 1 & 0 &-1 & 0 & 1\\ 0 & 0 & 1 &-1 &-1 & 2\\ \end{array} \right],$$
so that
$$A= \begin{bmatrix} 0&-1&-1\\ -1& 0&-1\\ 1& 1& 2\\ \end{bmatrix}. $$
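Both routes to $A$ can be double-checked numerically; a short sketch reusing the answer's $X$ and $Y$:

```python
import numpy as np

X = np.array([[ 1,  0,  1],
              [ 0,  1,  1],
              [-1, -1, -1]], dtype=float)
Y = np.array([[ 1,  0,  0],
              [ 0,  1,  0],
              [-1, -1,  0]], dtype=float)

A = Y @ np.linalg.inv(X)          # A = Y X^{-1}

# Agrees with the matrix found by row reduction ...
assert np.allclose(A, [[ 0, -1, -1],
                       [-1,  0, -1],
                       [ 1,  1,  2]])

# ... and behaves as claimed: A X = Y, rank 2, nullity 1, and A is
# idempotent (projecting twice changes nothing).
assert np.allclose(A @ X, Y)
assert np.linalg.matrix_rank(A) == 2
assert np.allclose(A @ A, A)
```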