Part (a): By definition, the null space of the matrix $[L]$ is the set of all vectors that are sent to zero when multiplied by $[L]$; equivalently, it is the set of all vectors that the transformation $L$ sends to the zero vector, no matter what transformation $L$ happens to be.
Note that in this case, our null space will be $V^\perp$, the orthogonal complement of $V$. Can you see why this is the case geometrically?
Part (b): In terms of transformations, the column space of $[L]$ is the range or image of the transformation in question. In other words, the column space is the space of all possible outputs of the transformation. In our case, projecting onto $V$ always produces a vector in $V$, and conversely, every vector in $V$ is the projection of some vector onto $V$. We conclude, then, that the column space of $[L]$ is the entirety of the subspace $V$.
Now, what happens if we take a vector from $V$ and apply $L$ (our projection onto $V$)? Well, since the vector is in $V$, it's "already projected"; flattening it onto $V$ doesn't change it. So, for any $x$ in $V$ (which is our column space), we will find that $L(x) = x$.
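Since the particular basis of $V$ isn't written out here, the two facts above can be spot-checked numerically with a made-up $V$: a sketch assuming $V = \operatorname{span}\{v_1, v_2\} \subset \Bbb R^4$ for two hypothetical linearly independent vectors, using the standard projection formula $P = A(A^TA)^{-1}A^T$ where the columns of $A$ are $v_1, v_2$.

```python
import numpy as np

# Hypothetical basis for V (the problem's actual vectors aren't given here).
v1 = np.array([1.0, 0.0, 1.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0, 1.0])
A = np.column_stack([v1, v2])

# Orthogonal projection onto V = span{v1, v2}: P = A (A^T A)^{-1} A^T
P = A @ np.linalg.inv(A.T @ A) @ A.T

# A vector already in V is fixed by the projection: L(x) = x.
x = 2 * v1 - 3 * v2
print(np.allclose(P @ x, x))        # True

# A vector orthogonal to V lies in the null space: L(w) = 0.
w = np.array([1.0, 0.0, -1.0, 0.0])  # w is orthogonal to both v1 and v2
print(np.allclose(P @ w, 0))        # True
```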
Part (c): The rank is the dimension of the column space. In this case, our column space is $V$. What's its dimension? Well, $V$ is the span of two linearly independent vectors, so $V$ is 2-dimensional. So, the rank of $[L]$ is $2$.
We know that the null space is $V^\perp$. Since $V$ has dimension $2$ inside the $4$-dimensional space $\Bbb R^4$, $V^\perp$ will have dimension $4 - 2 = 2$. So, the nullity of $[L]$ is $2$.
Alternatively, it was enough to know the rank: the rank-nullity theorem tells us that since the dimension of the overall (starting) space is $4$ and the rank is $2$, the nullity must be $4 - 2 = 2$.
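As a quick numerical sanity check (again using a hypothetical basis for $V$, since the original vectors aren't reproduced here), the rank of the projection matrix can be computed directly and the nullity read off via rank-nullity:

```python
import numpy as np

# Hypothetical V = span{v1, v2} in R^4 (an assumed example basis).
v1 = np.array([1.0, 0.0, 1.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0, 1.0])
A = np.column_stack([v1, v2])
P = A @ np.linalg.inv(A.T @ A) @ A.T  # the matrix [L] of the projection

rank = np.linalg.matrix_rank(P)
nullity = 4 - rank                    # rank-nullity theorem in R^4
print(rank, nullity)                  # 2 2
```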
The left-nullity of $A$ is always at most that of $AB$ (both matrices have $m$ rows, and $\operatorname{rank}(AB) \le \operatorname{rank}(A)$), but the (right) nullity of $A$ need not be at most that of $AB$. For example, if
$$ A = \begin{pmatrix} 1 & 1 \end{pmatrix}, B = \begin{pmatrix} 1 \\ 1 \end{pmatrix}, $$
then $\operatorname{nullity}(A) = 1$ but $\operatorname{nullity}(AB) = 0$.
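The counterexample above can be verified in a couple of lines, computing nullity as (number of columns) minus rank:

```python
import numpy as np

A = np.array([[1.0, 1.0]])        # 1 x 2
B = np.array([[1.0], [1.0]])      # 2 x 1
AB = A @ B                        # the 1 x 1 matrix [2]

def nullity(M):
    # nullity = number of columns minus rank (rank-nullity theorem)
    return M.shape[1] - np.linalg.matrix_rank(M)

print(nullity(A), nullity(AB))    # 1 0
```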
You can see this from the rank-nullity theorem: if $A$ is $m \times n$ and $B$ is $n \times p$, then
$$\operatorname{rank}(A) + \operatorname{nullity}(A) = n,$$
$$\operatorname{rank}(B) + \operatorname{nullity}(B) = p,$$
$$\operatorname{rank}(AB) + \operatorname{nullity}(AB) = p.$$
So we can relate the nullity of $B$ to that of $AB$ because both are measured against $p$, while the nullity of $A$ is measured against $n$. Indeed, if $\operatorname{nullity}(A) > p$, then $\operatorname{nullity}(A) > \operatorname{nullity}(AB)$, since $\operatorname{nullity}(AB) \le p$.
If $A$ and $B$ are square ($n \times n$) matrices, then the inequality always holds: $\operatorname{nullity}(A) \le \operatorname{nullity}(AB)$. One way to see this is to observe that the rank cannot increase, $\operatorname{rank}(AB) \le \operatorname{rank}(A)$, so $\operatorname{nullity}(A) = n - \operatorname{rank}(A) \le n - \operatorname{rank}(AB) = \operatorname{nullity}(AB)$.
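A quick randomized spot-check of the square case (a sketch, not a proof — it forces $A$ to be singular so the inequality is exercised nontrivially):

```python
import numpy as np

rng = np.random.default_rng(0)

def nullity(M):
    # nullity = number of columns minus rank
    return M.shape[1] - np.linalg.matrix_rank(M)

# For square A and B, nullity(A) <= nullity(AB) should always hold.
for _ in range(100):
    A = rng.integers(-2, 3, size=(4, 4)).astype(float)
    B = rng.integers(-2, 3, size=(4, 4)).astype(float)
    A[:, 0] = A[:, 1]             # duplicate a column so A is singular
    assert nullity(A) <= nullity(A @ B)
print("ok")
```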
This follows directly from the associativity of matrix multiplication: matrix multiplication implements composition of linear maps, and composition of functions is clearly associative.
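Associativity is easy to check numerically on arbitrary (here, randomly generated) matrices of compatible shapes; $(AB)C$ and $A(BC)$ agree up to floating-point round-off:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))
C = rng.standard_normal((4, 5))

# Both groupings compute the same composed linear map R^5 -> R^2.
print(np.allclose((A @ B) @ C, A @ (B @ C)))  # True
```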