There is no point in trying to visualise this, since the row space of $A$ and its null space are not naturally subspaces of the same space. If $A$ corresponds to a linear map $f:V\to W$, then the null space of $A$ corresponds to the kernel $\ker(f)$, a subspace of$~V$. Each row of $A$ computes one coordinate of the image under $f$, so it is a linear form on $V$ (a linear function $V\to\Bbb R$). The row space consists of all linear combinations of these rows, so it is naturally a subspace of the space $V^*$ of linear forms on$~V$. This space $V^*$ is not the same space as$~V$, even though it has the same dimension; it is called the dual vector space of$~V$. The row space is then a subspace $R\subseteq V^*$, and the natural statement of what the cited passage is saying is that $\ker(f)$ is the set of vectors where all linear forms of$~R$ vanish simultaneously. This is quite unsurprising: $\ker(f)$ is by definition the set where the linear forms corresponding to the rows of $A$ are all zero (the intersection of their zero sets), but once this happens for some $v\in V$, any linear combination of those linear forms is also zero at$~v$.
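For instance (a made-up $2\times3$ example): if $A=\begin{pmatrix}1&2&3\\0&1&1\end{pmatrix}$, the two rows give the linear forms $\ell_1(x,y,z)=x+2y+3z$ and $\ell_2(x,y,z)=y+z$ on $V=\Bbb R^3$; the kernel is the line where $\ell_1$ and $\ell_2$ both vanish (spanned by $(-1,-1,1)$), and any combination $a\ell_1+b\ell_2\in R$ automatically vanishes there too.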
Now imagining combinations of linear forms on $V$ might be a bit hard, so one may just imagine that $V$ has an inner product for which the basis one was using (to define the matrix $A$ of $f$) is orthonormal. This inner product is artificial and has no intrinsic meaning, but at least it allows one to represent each linear form as the inner product with one specific vector$~v_i$. This representation of linear forms by vectors is what goes on in the operation of transposition. Then row$~i$ of $A$ corresponds to (the inner product with)$~v_i$, and the row space corresponds to the span of $v_1,\ldots,v_m$. Now the subspace where the linear form for row$~i$ of$~A$ vanishes is the set of vectors orthogonal to$~v_i$. Then $\ker(f)$ is the set of vectors where this happens for all rows at once, i.e., the subspace of vectors orthogonal to $v_1,\ldots,v_m$, which is the orthogonal complement of the span of $v_1,\ldots,v_m$. This is a way to visualise the statement, but remember that we are really just saying that all linear forms in $R$ vanish simultaneously.
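To make this concrete, here is a minimal NumPy sketch (not from the original discussion; it reuses the made-up $2\times3$ matrix above) that extracts a basis of the null space and checks that it is orthogonal to every row, and hence to the whole row space:

```python
import numpy as np

# Made-up 2x3 matrix: a map R^3 -> R^2 whose kernel is a line.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])

# Null-space basis via SVD: right singular vectors whose singular
# value is (numerically) zero span the kernel.
_, s, Vt = np.linalg.svd(A)
sing_vals = np.concatenate([s, np.zeros(A.shape[1] - len(s))])
null_basis = Vt[sing_vals < 1e-12]           # each row spans ker(f)

# Each null vector is killed by every row of A, hence by every
# linear combination of rows, i.e. by the whole row space.
print(np.allclose(A @ null_basis.T, 0))      # True
combo = np.array([2.0, -5.0]) @ A            # an arbitrary element of R
print(np.allclose(combo @ null_basis.T, 0))  # True
```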
Part (a): By definition, the null space of the matrix $[L]$ is the space of all vectors that are sent to zero when multiplied by $[L]$. Equivalently, the null space is the set of all vectors that are sent to zero when the transformation $L$ is applied. $L$ transforms all vectors in its null space to the zero vector, no matter what transformation $L$ happens to be.
Note that in this case, our null space will be $V^\perp$, the orthogonal complement of $V$. Can you see why this is the case geometrically?
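The original spanning vectors of $V$ aren't reproduced here, so the following NumPy sketch uses two stand-in vectors in $\Bbb R^4$; it builds the projection matrix $[L]=B(B^TB)^{-1}B^T$ and checks that a vector orthogonal to $V$ is sent to zero:

```python
import numpy as np

# Stand-in basis for V: two independent vectors in R^4 (the problem's
# actual spanning vectors are not shown in this excerpt).
B = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]]).T    # columns span V

# Matrix of the orthogonal projection onto V.
P = B @ np.linalg.inv(B.T @ B) @ B.T

w = np.array([1.0, 0.0, -1.0, 0.0])       # orthogonal to both columns
print(B.T @ w)                            # [0. 0.] -> w lies in V-perp
print(np.allclose(P @ w, 0))              # True: w is in the null space
```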
Part (b): In terms of transformations, the column space of $[L]$ is the range or image of the transformation in question. In other words, the column space is the space of all possible outputs of the transformation. In our case, projecting onto $V$ always produces a vector in $V$, and conversely, every vector in $V$ is the projection of some vector onto $V$ (for instance, of itself). We conclude, then, that the column space of $[L]$ is the entirety of the subspace $V$.
Now, what happens if we take a vector from $V$ and apply $L$ (our projection onto $V$)? Well, since the vector is in $V$, it's "already projected"; flattening it onto $V$ doesn't change it. So, for any $x$ in $V$ (which is our column space), we will find that $L(x) = x$.
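Continuing with the same stand-in $V$ as above, one can check both claims numerically: vectors already in $V$ are fixed, and (equivalently) $P$ is idempotent, so projecting an already-projected vector changes nothing:

```python
import numpy as np

B = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]]).T    # stand-in basis of V
P = B @ np.linalg.inv(B.T @ B) @ B.T      # projection onto V

x = B @ np.array([3.0, -2.0])             # an arbitrary vector of V
print(np.allclose(P @ x, x))              # True: L(x) = x on V

y = np.array([1.0, 2.0, 3.0, 4.0])        # a generic input
print(np.allclose(P @ (P @ y), P @ y))    # True: projecting twice = once
```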
Part (c): The rank is the dimension of the column space. In this case, our column space is $V$. What's its dimension? Well, $V$ is the span of two linearly independent vectors, so $V$ is $2$-dimensional. So, the rank of $[L]$ is $2$.
We know that the null space is $V^\perp$. Since $V$ has dimension $2$ inside the $4$-dimensional space $\Bbb R^4$, $V^\perp$ has dimension $4 - 2 = 2$. So, the nullity of $[L]$ is $2$.
Alternatively, it was enough to know the rank: the rank-nullity theorem tells us that since the dimension of the overall (starting) space is $4$ and the rank is $2$, the nullity must be $4 - 2 = 2$.
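A quick numerical sanity check of part (c), again with the stand-in basis (so the ranks match this excerpt's $2$-dimensional $V$ inside $\Bbb R^4$):

```python
import numpy as np

B = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]]).T    # stand-in basis of V
P = B @ np.linalg.inv(B.T @ B) @ B.T      # [L]

rank = np.linalg.matrix_rank(P)
nullity = P.shape[1] - rank               # rank-nullity in R^4
print(rank, nullity)                      # 2 2
```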
Well, to me your attempt is as intuitive as it gets. Just a remark: you assumed that $m\neq n$ in order to conclude that $x$ can't be in the column space or left null space, which a priori is not given (at least, it wasn't stated in the question). But this is fine, since if $m=n$ and $A$ has independent columns, then $A$ is an invertible square matrix and $A^TAx=A^Tb\implies Ax=b$; but since $b$ is assumed not to be in the column space of $A$, this case cannot occur.
Maybe what you're asking for is a more explicit computation? If so, let $A=\begin{bmatrix}L_1 \\ \vdots\\ L_m\end{bmatrix}$, where $L_i$ is the $i$-th row, a $1\times n$ row vector. If $b=\begin{bmatrix} b_1 \\ \vdots\\ b_m\end{bmatrix}$, we have that
\begin{align*} A^TA\textbf{x}=A^Tb&\implies \begin{bmatrix} L_1^T & \cdots & L_m^T\end{bmatrix}\begin{bmatrix}L_1 \\ \vdots\\ L_m\end{bmatrix}\textbf{x}=\begin{bmatrix} L_1^T & \cdots & L_m^T\end{bmatrix}\begin{bmatrix} b_1 \\ \vdots\\ b_m\end{bmatrix}\\ &\implies\left(L_1^TL_1+\dots+L_m^TL_m\right)\textbf{x}=b_1L_1^T+\dots+b_mL_m^T \end{align*}
So $A^TA\textbf{x}$ is a linear combination of the transposed rows $L_1^T,\ldots,L_m^T$, i.e. it lies in the row space of $A$. Moreover, $A^TA$ maps the row space into itself, since $A^TA(A^Ty)=A^T(AA^Ty)$ is again of the form $A^T(\text{something})$, and $A^TA$ is invertible because the columns of $A$ are independent; an invertible map that sends a subspace into itself also maps it onto itself. Hence $\textbf{x}=(A^TA)^{-1}\left(b_1L_1^T+\dots+b_mL_m^T\right)$ is indeed in the row space of $A$.
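As a sanity check, here is a short NumPy sketch with made-up data: a $3\times2$ matrix with independent columns and a $b$ outside its column space. (With independent columns the row space is all of $\Bbb R^n$, so the check is necessarily satisfied; it just makes the algebra concrete.)

```python
import numpy as np

# Made-up example: independent columns, b not in col(A).
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

x = np.linalg.solve(A.T @ A, A.T @ b)    # normal equations A^T A x = A^T b

# Express x as a combination of the (transposed) rows of A.
coeffs, *_ = np.linalg.lstsq(A.T, x, rcond=None)
print(np.allclose(A.T @ coeffs, x))      # True: x lies in row(A)
```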