Why not just reduce the matrix? It's not more work and often makes things easier.
\begin{bmatrix}
1&0&0&0
\\0&1&0&0
\\0&0&1&0
\end{bmatrix}
1) We see that the row rank and column rank (they are always equal) are both $3$.
The nullity is $n-\operatorname{rank}=4-3=1$.
To find the null space we have to solve $A\mathbf{x}=\mathbf{0}$, which is easy in row-reduced form: the solutions are $\mathbf{x}=\begin{bmatrix}
0\\0\\0\\a
\end{bmatrix}$ for any scalar $a$, so the null space is spanned by $(0,0,0,1)^T$.
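As a sanity check, the rank and nullity computations above can be reproduced numerically. A quick sketch using NumPy on the reduced matrix (the names `R` and `x`, and the sample value $a=5$, are just illustrative):

```python
import numpy as np

# The row-reduced matrix from the answer (3 rows, 4 columns).
R = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]])

rank = np.linalg.matrix_rank(R)
nullity = R.shape[1] - rank        # rank-nullity: nullity = n - rank
print(rank, nullity)               # 3 1

# Any vector of the form (0, 0, 0, a) is sent to zero:
x = np.array([0, 0, 0, 5.0])
print(R @ x)                       # [0. 0. 0.]
```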
2) The row space has dimension $3$ as mentioned, for the basis one can take:
$$(1,0,0,0)$$
$$(0,1,0,0)$$
$$(0,0,1,0)$$
Or one could take the rows of the original matrix, since the rank is $3$.
3) The column rank is also $3$. Row reduction does not change the linear dependence relations among the columns (it just expresses them in a different basis), so a basis for the column space is given by the columns of the original matrix corresponding to the pivot positions of the reduced matrix, namely the first $3$:
$$\begin{bmatrix}
1\\3\\-1
\end{bmatrix},
\begin{bmatrix}
1\\-1\\2
\end{bmatrix},
\begin{bmatrix}
-1\\2\\-4
\end{bmatrix}. $$
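For a numerical check, the three quoted pivot columns can be stacked into a matrix and their rank computed. A sketch using NumPy (the fourth column of the original matrix is not reproduced in the answer, so only these three appear):

```python
import numpy as np

# The three pivot columns of the original matrix quoted above.
C = np.array([[ 1,  1, -1],
              [ 3, -1,  2],
              [-1,  2, -4]])

# Rank 3 means they are linearly independent, so they form a basis
# for the column space (with 3 rows and rank 3, that is all of R^3).
print(np.linalg.matrix_rank(C))    # 3
```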
Part (a): By definition, the null space of the matrix $[L]$ is the space of all vectors that are sent to zero when multiplied by $[L]$. Equivalently, the null space is the set of all vectors that are sent to zero when the transformation $L$ is applied. $L$ transforms all vectors in its null space to the zero vector, no matter what transformation $L$ happens to be.
Note that in this case, our nullspace will be $V^\perp$, the orthogonal complement to $V$. Can you see why this is the case geometrically?
Part (b): In terms of transformations, the column space $L$ is the range or image of the transformation in question. In other words, the column space is the space of all possible outputs from the transformation. In our case, projecting onto $V$ will always produce a vector from $V$ and conversely, every vector in $V$ is the projection of some vector onto $V$. We conclude, then, that the column space of $[L]$ will be the entirety of the subspace $V$.
Now, what happens if we take a vector from $V$ and apply $L$ (our projection onto $V$)? Well, since the vector is in $V$, it's "already projected"; flattening it onto $V$ doesn't change it. So, for any $x$ in $V$ (which is our column space), we will find that $L(x) = x$.
Part (c): The rank is the dimension of the column space. In this case, our column space is $V$. What's its dimension? Well, $V$ is the span of two linearly independent vectors, so $V$ is $2$-dimensional. So, the rank of $[L]$ is $2$.
We know that the null space is $V^\perp$. Since $V$ has dimension $2$ in the $4$-dimensional space $\Bbb R^4$, $V^\perp$ will have dimension $4 - 2 = 2$. So, the nullity of $[L]$ is $2$.
Alternatively, it was enough to know the rank: the rank-nullity theorem tells us that since the dimension of the overall (starting) space is $4$ and the rank is $2$, the nullity must be $4 - 2 = 2$.
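The projection argument in parts (a)–(c) can be sketched numerically. The spanning vectors of $V$ below are hypothetical stand-ins (the original problem's vectors aren't quoted here); $P = A(A^TA)^{-1}A^T$ is the standard formula for orthogonal projection onto the column space of $A$:

```python
import numpy as np

# Hypothetical example: V = span of two independent vectors in R^4.
A = np.array([[1., 0.],
              [0., 1.],
              [1., 1.],
              [0., 0.]])           # columns span V

# Orthogonal projection onto V.
P = A @ np.linalg.inv(A.T @ A) @ A.T

rank = np.linalg.matrix_rank(P)
print(rank, 4 - rank)              # 2 2  (rank = dim V, nullity = dim V-perp)

# P fixes vectors already in V: L(x) = x for x in V.
v = A @ np.array([2., -3.])        # some vector in V
print(np.allclose(P @ v, v))       # True
```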
A matrix is not just an array of numbers. It is helpful to think of it as a device that takes a vector as input and produces another vector as output by multiplication: for input $v$, the output is $Av$. This output is a linear combination of the column vectors of $A$, with the coefficients provided by the components of $v$. So the output belongs to the column space.
It is possible that $Av$ is the zero vector; in that case $v$ is said to be in the null space.
For left multiplication $vA$ one has a similar interpretation, but everything is in terms of the rows of $A$ instead of the columns.
Now look at a matrix like $A=\pmatrix{1 & 2 & 3\cr 1 & 2 & 3 \cr 1 & 2 & 3\cr}$. Every column vector is of the form $(x, x, x)^T$, as is any scalar multiple of one, and so any linear combination is also of the same type. So the column space consists exclusively of vectors of the kind $(x,x,x)^T$. But any vector in the row space of $A$ is clearly of the form $(y, 2y, 3y)$. So the column space and row space have nothing in common except the zero vector.
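This example is easy to verify numerically. A small NumPy sketch (`v` is an arbitrary test vector):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [1, 2, 3],
              [1, 2, 3]])

print(np.linalg.matrix_rank(A))    # 1

# A column-space vector: every A @ v has equal entries (x, x, x).
v = np.array([2, -1, 4])
print(A @ v)                       # [12 12 12]

# A row-space vector: every v @ A is a multiple of (1, 2, 3).
print(v @ A)                       # [ 5 10 15]
```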
When the rank of a $3\times 3$ matrix is $3$, the columns form a basis for $\mathbf{R}^3$, and so every vector is in the column space, including those in the row space, and vice versa.
When the matrix is symmetric we can also check that the row space and column space coincide. Otherwise they need not coincide. The only thing we can say in general is that those spaces have the *same dimension*, which is much weaker than saying they are the same space.
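A quick numerical illustration of the symmetric case (the matrix `S` is a hypothetical example): for symmetric $S$, stacking the rows of $S$ on top of the rows of $S^T$ does not increase the rank, which confirms the row space and column space coincide.

```python
import numpy as np

# A symmetric matrix of rank 2.
S = np.array([[2, 1, 0],
              [1, 2, 0],
              [0, 0, 0]])
assert (S == S.T).all()

r = np.linalg.matrix_rank(S)
print(r)                                          # 2

# Row space of S + row space of S^T together still have dimension 2,
# so the two spaces (each 2-dimensional) must be equal.
print(np.linalg.matrix_rank(np.vstack([S, S.T]))) # 2
```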