Put
$$
u = \begin{pmatrix}1 \\ 0 \\ -1 \end{pmatrix}, \quad
v = \begin{pmatrix}2 \\ 1 \\ 3 \end{pmatrix}, \quad
w = \begin{pmatrix}1 \\ -1 \\ 1 \end{pmatrix}.
$$
We compute $Aw = 0$ and conclude that $w\in \ker L_A$. Since $w \neq 0$, this gives $\dim \ker L_A \geq 1$, and hence $\dim \operatorname{im} L_A \leq 2$ by rank–nullity: $3 = \dim \operatorname{im} L_A + \dim \ker L_A$. Now, $u$ is the first column of $A$ and $v$ is the second, so certainly $u,v \in \operatorname{im} L_A$. Moreover, $u$ and $v$ are linearly independent. One way to see this is to note that the top-left $2\times 2$ minor of $A$ is nonzero:
$$
\begin{vmatrix} 1 & 2 \\ 0 & 1 \end{vmatrix} = 1.
$$
This implies $\dim \operatorname{im} L_A \geq 2$, so all in all we find $\dim \operatorname{im} L_A = 2$ and $\dim \ker L_A = 1$. So indeed, $\{w\}$ is a basis of $\ker L_A$ and $\{u,v\}$ is a basis of $\operatorname{im} L_A$.
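Although $A$ itself is not reproduced in this excerpt, its first two columns are $u$ and $v$, and $Aw = 0$ then forces the third column to be $v - u$. Here is a quick NumPy sanity check of the dimension count, with that reconstructed third column (an inference, not quoted from the problem):

```python
import numpy as np

u = np.array([1, 0, -1])
v = np.array([2, 1, 3])
w = np.array([1, -1, 1])

# The first two columns of A are u and v; since A @ w = u - v + c3 = 0,
# the third column must be c3 = v - u. (Reconstructed, not quoted.)
A = np.column_stack([u, v, v - u])

print(A @ w)                     # [0 0 0]: w lies in ker L_A
print(np.linalg.matrix_rank(A))  # 2 = dim im L_A, so dim ker L_A = 3 - 2 = 1
```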
Next, we show that $u,v,w$ are all linearly independent. If this were not true, then $w$ would have to be a linear combination of $u$ and $v$, since $u$ and $v$ are linearly independent. So we write $w = \lambda u + \mu v$ and try to determine $\lambda$ and $\mu.$ By looking at the second coordinate we see that necessarily $\mu = -1,$ so we must have $w = \lambda u - v.$ This implies that
$$
w + v = \begin{pmatrix}3 \\ 0 \\ 4 \end{pmatrix}
$$
must be a multiple of $u$. But it is not: a multiple of $u$ with first coordinate $3$ would be $3u = (3,0,-3)^T \neq (3,0,4)^T$. So indeed $u,v,w$ are linearly independent. This means that $\ker L_A \cap \operatorname{im} L_A = \{0\}$ and $\ker L_A + \operatorname{im} L_A = \mathbb R^3_{col}$. In particular, we can take $\{u,v,w\}$ as a basis for $\ker L_A + \operatorname{im} L_A$.
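The independence of all three vectors can also be checked in one stroke: the $3\times 3$ matrix with columns $u, v, w$ has nonzero determinant. A short NumPy sketch:

```python
import numpy as np

u = np.array([1, 0, -1])
v = np.array([2, 1, 3])
w = np.array([1, -1, 1])

# Stack u, v, w as columns; a nonzero determinant means they are independent.
M = np.column_stack([u, v, w])
d = np.linalg.det(M)
print(round(d))  # 7, nonzero, so {u, v, w} is a basis of R^3
```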
Part (a): By definition, the null space of the matrix $[L]$ is the space of all vectors that are sent to zero when multiplied by $[L]$; equivalently, it is the set of all vectors that the transformation $L$ sends to the zero vector, whatever transformation $L$ happens to be.
Note that in this case, our nullspace will be $V^\perp$, the orthogonal complement to $V$. Can you see why this is the case geometrically?
Part (b): In terms of transformations, the column space of $[L]$ is the range, or image, of the transformation in question. In other words, the column space is the space of all possible outputs of the transformation. In our case, projecting onto $V$ always produces a vector in $V$, and conversely, every vector in $V$ is the projection of some vector onto $V$. We conclude, then, that the column space of $[L]$ is the entirety of the subspace $V$.
Now, what happens if we take a vector from $V$ and apply $L$ (our projection onto $V$)? Well, since the vector is in $V$, it's "already projected"; flattening it onto $V$ doesn't change it. So, for any $x$ in $V$ (which is our column space), we will find that $L(x) = x$.
Part (c): The rank is the dimension of the column space. In this case, our column space is $V$. What's its dimension? Well, $V$ is the span of two linearly independent vectors, so $V$ is $2$-dimensional. So, the rank of $[L]$ is $2$.
We know that the null space is $V^\perp$. Since $V$ has dimension $2$ in the $4$-dimensional $\Bbb R^4$, $V^\perp$ has dimension $4 - 2 = 2$. So, the nullity of $[L]$ is $2$.
Alternatively, it was enough to know the rank: the rank-nullity theorem tells us that since the dimension of the overall (starting) space is $4$ and the rank is $2$, the nullity must be $4 - 2 = 2$.
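Since the basis of $V$ is not quoted in this excerpt, the following sketch uses two hypothetical independent vectors in $\mathbb{R}^4$ in its place, and builds the projection matrix with the standard formula $P = B(B^TB)^{-1}B^T$:

```python
import numpy as np

# Hypothetical stand-in for the (unquoted) basis of V, a 2-dimensional
# subspace of R^4.
a = np.array([1.0, 0.0, 1.0, 0.0])
b = np.array([0.0, 1.0, 0.0, 1.0])
B = np.column_stack([a, b])

# Orthogonal projection onto V = span{a, b}: P = B (B^T B)^{-1} B^T.
P = B @ np.linalg.inv(B.T @ B) @ B.T

print(np.linalg.matrix_rank(P))      # rank = 2 = dim V
x = 3 * a - 2 * b                    # a vector already in V ...
print(np.allclose(P @ x, x))         # ... is left unchanged: True
print(4 - np.linalg.matrix_rank(P))  # nullity = 4 - 2 = 2
```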
Suppose the matrix $A$ is $m\times n$ and has rank $k$.
Thus $A$ defines a linear map $L_A\colon \mathbb{R}^n\to \mathbb{R}^m$. The null space has dimension $n-k$ and the image has dimension $k$.
The cokernel $\mathbb{R}^{m}/\operatorname{im}L_A$ has dimension $m-k$. The coimage $\mathbb{R}^n/\ker L_A$ has dimension $n-(n-k)=k$.
The matrix $A$ also defines a linear map $R_A\colon \mathbb{R}_m\to \mathbb{R}_n$ (spaces of row vectors) by $y\mapsto yA$. The image of this linear map has dimension $k$ and the kernel has dimension $m-k$.
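These dimension counts are easy to verify numerically. A sketch with a made-up example matrix ($m = 4$, $n = 3$, rank $k = 2$, since the third column is the sum of the first two):

```python
import numpy as np

# Example only: m = 4, n = 3, and rank k = 2 (third column = first + second).
A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [2., 3., 5.],
              [1., 1., 2.]])
m, n = A.shape
k = np.linalg.matrix_rank(A)

print(k)      # 2 = dim im L_A = dim coim L_A = dim im R_A
print(n - k)  # 1 = dim ker L_A
print(m - k)  # 2 = dim coker L_A = dim ker R_A
```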
Hey! The dimensions agree! Maybe we can find a “canonical” isomorphism $$ f\colon \ker R_A\to \operatorname{coker}L_A $$ In other words, we'd like to associate to each vector $y\in\ker R_A$ a unique element in $\mathbb{R}^m/\operatorname{im}L_A$.
Let $y\in\ker R_A$, so $y^T\in \mathbb{R}^m$. The natural choice would be defining $f(y)=y^T+\operatorname{im}L_A$.
This is clearly a linear map. Is it injective? Suppose $y\in\ker f$. This means $y^T=Ax$ for some $x\in \mathbb{R}^n$. Also $yA=0$ by definition, so $$ 0=A^Ty^T=A^TAx $$ and so $x^TA^TAx=\|Ax\|^2=0$, which forces $Ax=0$; hence $y^T=Ax=0$. Yes! The map $f$ is injective. The dimensions coincide, so $f$ is indeed an isomorphism.
Similarly you can find an isomorphism between $\mathbb{R}^n/\ker L_A$ and $\operatorname{im}R_A$.
Note however that these isomorphisms rely on the fact that $x^TA^TAx=0$ implies $Ax=0$, which holds over $\mathbb{R}$ because $x^TA^TAx=\|Ax\|^2$. This is not true over general fields.
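For a concrete failure, work over $\mathbb{F}_2$: with $A = \begin{pmatrix}1\\1\end{pmatrix}$ we get $A^TA = (2) \equiv (0)$, so $x^TA^TAx = 0$ for every $x$, even though $Ax \neq 0$ for $x = 1$. A sketch with the arithmetic carried out mod 2:

```python
import numpy as np

# Over F_2, the quadratic form x^T A^T A x can vanish without A x vanishing.
A = np.array([[1], [1]])  # a 2x1 matrix, entries in F_2
x = np.array([1])

gram = (A.T @ A) % 2    # A^T A = (2) = (0) mod 2
quad = (x @ gram @ x) % 2
print(quad)             # 0: the form vanishes ...
print((A @ x) % 2)      # [1 1]: ... yet A x is nonzero
```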