Part (a): By definition, the null space of the matrix $[L]$ is the set of all vectors that are sent to zero when multiplied by $[L]$. Equivalently, it is the set of all vectors that the transformation $L$ sends to the zero vector, whatever transformation $L$ happens to be.
Note that in this case, our nullspace will be $V^\perp$, the orthogonal complement to $V$. Can you see why this is the case geometrically?
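As a quick numerical sanity check, here is a minimal numpy sketch. The basis for $V$ below is an assumption chosen purely for illustration (not the one from the original problem): we build the orthogonal projection matrix onto $V$ and confirm that a vector orthogonal to $V$ is sent to zero.

```python
import numpy as np

# Illustrative (assumed) basis for a 2-dimensional subspace V of R^4.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])

# Orthogonal projection onto V = col(A):  P = A (A^T A)^{-1} A^T
P = A @ np.linalg.inv(A.T @ A) @ A.T

# A vector orthogonal to both basis vectors lies in V-perp,
# so the projection flattens it to the zero vector.
w = np.array([1.0, 0.0, -1.0, 0.0])
print(np.allclose(P @ w, 0))  # True
```

Geometrically, $w$ has no component along $V$, so there is nothing left after projecting.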
Part (b): In terms of transformations, the column space of $[L]$ is the range or image of the transformation in question. In other words, the column space is the space of all possible outputs of the transformation. In our case, projecting onto $V$ always produces a vector in $V$, and conversely, every vector in $V$ is the projection of some vector onto $V$. We conclude, then, that the column space of $[L]$ is the entirety of the subspace $V$.
Now, what happens if we take a vector from $V$ and apply $L$ (our projection onto $V$)? Well, since the vector is in $V$, it's "already projected"; flattening it onto $V$ doesn't change it. So, for any $x$ in $V$ (which is our column space), we will find that $L(x) = x$.
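This fixed-point property can be checked numerically too. The sketch below assumes an illustrative basis for $V$ (not the one from the problem) and verifies both $L(x) = x$ for $x \in V$ and the equivalent matrix identity $P^2 = P$.

```python
import numpy as np

# Illustrative (assumed) basis for V; P is the projection onto V.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
P = A @ np.linalg.inv(A.T @ A) @ A.T

x = A @ np.array([3.0, -2.0])   # an arbitrary vector already in V
print(np.allclose(P @ x, x))    # True: L(x) = x for x in V
print(np.allclose(P @ P, P))    # True: projecting twice = projecting once
```

The identity $P^2 = P$ is exactly the statement that vectors in the column space are "already projected".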
Part (c): The rank is the dimension of the column space. In this case, our column space is $V$. What's its dimension? Well, $V$ is the span of two linearly independent vectors, so $V$ is 2-dimensional. So, the rank of $[L]$ is $2$.
We know that the null space is $V^\perp$. Since $V$ has dimension $2$ in the $4$-dimensional space $\Bbb R^4$, $V^\perp$ has dimension $4 - 2 = 2$. So, the nullity of $[L]$ is $2$.
Alternatively, it was enough to know the rank: the rank-nullity theorem tells us that since the dimension of the overall (starting) space is $4$ and the rank is $2$, the nullity must be $4 - 2 = 2$.
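Both counts can be confirmed in a few lines of numpy; the basis for $V$ below is again an assumption for illustration.

```python
import numpy as np
from numpy.linalg import matrix_rank

# Illustrative (assumed) basis for V; P projects R^4 onto V.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
P = A @ np.linalg.inv(A.T @ A) @ A.T

rank = matrix_rank(P)
nullity = P.shape[1] - rank   # rank-nullity: dim(domain) = rank + nullity
print(rank, nullity)          # 2 2
```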
A matrix is not just an array of numbers. It is helpful to think of it as a device that takes a vector as input and produces another vector as output by multiplication: for input $v$, the output is $Av$. This output is obtained by taking a linear combination of the column vectors of $A$, with the coefficients for the combination provided by the components of $v$. So the output belongs to the column space.
It is possible that $Av$ is the zero vector; in that case, $v$ is said to be in the null space.
For left multiplication $vA$, one has a similar interpretation, but everything is in terms of the rows of $A$ instead of the columns.
Now look at a matrix like $A=\pmatrix{1 & 2 & 3\cr 1 & 2 & 3 \cr 1 & 2 & 3\cr}$. Every column vector is of the form $(x, x, x)^T$, and so any linear combination of the columns is also of that form. So the column space consists exclusively of vectors of the kind $(x,x,x)^T$. But every row is $(1, 2, 3)$, so any vector in the row space of $A$ is clearly of the form $(y, 2y, 3y)^T$. So the column space and the row space have nothing in common except the zero vector.
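The example can be checked directly in numpy; the particular coefficient vector below is an arbitrary choice for illustration.

```python
import numpy as np

# The rank-1 example from the text: every column is a multiple of (1,1,1)^T
# and every row is a multiple of (1,2,3).
A = np.array([[1, 2, 3],
              [1, 2, 3],
              [1, 2, 3]], dtype=float)

coeffs = np.array([2.0, -1.0, 4.0])   # arbitrary illustrative coefficients

c = A @ coeffs        # combination of the columns
r = coeffs @ A        # combination of the rows

print(np.allclose(c, c[0]))                        # True: c = (x, x, x)^T
print(np.allclose(r, r[0] * np.array([1, 2, 3])))  # True: r = (y, 2y, 3y)
```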
By contrast, when a $3\times 3$ matrix has rank 3, its columns form a basis for $\mathbf{R}^3$, so every vector is in the column space, including those in the row space, and vice versa.
When the matrix is symmetric, we can also check that the row space and column space coincide. In general, however, they need not coincide: the only thing we can always say is that these spaces have the *same dimension*, which is much weaker than saying they are the same.
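A small numpy sketch illustrates the distinction; the non-symmetric $2\times 2$ example below is an assumption chosen for its simplicity.

```python
import numpy as np
from numpy.linalg import matrix_rank

# A non-symmetric rank-1 matrix whose row and column spaces are
# different lines of the same dimension.
B = np.array([[0.0, 1.0],
              [0.0, 0.0]])

# Both spaces are 1-dimensional (rank of B = rank of B^T).
print(matrix_rank(B) == matrix_rank(B.T))   # True

# Column space of B is span{(1,0)}; row space is span{(0,1)}.
# Stacking a column-space vector onto the rows raises the rank,
# so that vector is NOT in the row space.
col = B[:, 1]                               # (1, 0), spans col(B)
print(matrix_rank(np.vstack([B, col])))     # 2
```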
If you think of the left null space of $M$ as consisting of row vectors, and you also think of the elements of the annihilator of a subspace as being row vectors, then the left null space of $M$ is the annihilator of the range of $M$.
Here is a proof. Let $z$ be a column vector in $\mathbb R^m$. Then \begin{align} z^T \in R(M)^\circ &\iff z^T Mx = 0 \quad \text{for all } x \in \mathbb R^n\\ &\iff (M^T z)^T x = 0 \quad \text{for all } x \in \mathbb R^n\\ &\iff M^T z = 0 \\ &\iff z^T \text{ is in the left null space of $M$.} \end{align}
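The equivalences in the proof can be observed numerically. In the sketch below, the matrix $M$ and the test vector $x$ are randomly generated assumptions; a basis for the left null space (the null space of $M^T$) is read off from the SVD, and every basis vector annihilates $Mx$.

```python
import numpy as np

# Random rank-2 matrix M in R^{4x3} (an illustrative assumption).
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 3))

# Left singular vectors beyond the rank span the left null space of M.
U, s, Vt = np.linalg.svd(M)
left_null = U[:, 2:]          # columns paired with (numerically) zero sing. values

# Any z in the left null space satisfies z^T (M x) = 0 for all x,
# i.e. z^T annihilates the range of M.
x = rng.standard_normal(3)
print(np.allclose(left_null.T @ (M @ x), 0))   # True
```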
Here is a generalization of the above fact.
Let $V$ and $W$ be finite dimensional vector spaces over a field $F$, and let $T:V \to W$ be a linear transformation. Let $V^*$ and $W^*$ be the dual spaces of $V$ and $W$ (respectively) and let $T^*:W^* \to V^*$ be the dual of $T$, defined by $$ \langle T^*z, x \rangle = \langle z, Tx \rangle. $$ Then the annihilator of the range of $T$ is the null space of $T^*$: $$ R(T)^\circ = N(T^*). $$ This is a generalization of the "four subspaces theorem" emphasized in Gilbert Strang's linear algebra books. (This theorem is sometimes called the "fundamental theorem of linear algebra".)
Here's a proof: \begin{align} z \in R(T)^\circ &\iff \langle z, Tx \rangle = 0 \quad \text{for all } x \in V\\ &\iff \langle T^* z, x \rangle = 0 \quad \text{for all } x \in V \\ &\iff T^*z = 0 \\ &\iff z \in N(T^*). \end{align}