There is no point in trying to visualise this, since the row space of $A$ and its null space are not naturally subspaces of the same space. If $A$ corresponds to a linear map $f\colon V\to W$, then the null space of $A$ corresponds to the kernel $\ker(f)$, a subspace of$~V$. Each row of $A$ computes one coordinate of the image under $f$, so it is a linear form on $V$ (a linear function $V\to\Bbb R$). The row space consists of all linear combinations of these rows, so it is naturally a subspace of the space $V^*$ of linear forms on$~V$. This space $V^*$ is not the same space as$~V$, even though it has the same dimension; it is called the dual vector space of$~V$. The row space is then a subspace $R\subseteq V^*$, and the natural statement of what the cited passage is saying is that $\ker(f)$ is the set of vectors at which all linear forms of$~R$ vanish simultaneously. This is quite unsurprising: the null space $\ker(f)$ is by definition the set where the linear forms corresponding to the rows of $A$ are all zero (the intersection of their zero sets), but once this happens for some $v\in V$, any linear combination of those linear forms is also zero at$~v$.
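To make this concrete, here is a small example (the matrix is chosen purely for illustration). Take
$$A=\begin{pmatrix}1&0&-1\\0&1&-1\end{pmatrix},$$
so the rows give the linear forms $\ell_1(x,y,z)=x-z$ and $\ell_2(x,y,z)=y-z$ on $V=\Bbb R^3$. Both vanish exactly on $\ker(f)=\operatorname{span}\{(1,1,1)\}$, and any combination $a\ell_1+b\ell_2$ in the row space also vanishes there: $(a\ell_1+b\ell_2)(1,1,1)=a\cdot0+b\cdot0=0$.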
Now imagining combinations of linear forms on $V$ might be a bit hard, so one may just imagine that $V$ carries an inner product for which the basis one was using (to define the matrix $A$ of $f$) is orthonormal. This inner product is artificial and has no intrinsic meaning, but at least it allows one to represent each linear form as the inner product with one specific vector$~v_i$. This representation of linear forms by vectors is what goes on in the operation of transposition. Then row$~i$ of $A$ corresponds to (the inner product with)$~v_i$, and the row space corresponds to the span of $v_1,\ldots,v_m$. Now the subspace where the linear form for row$~i$ of$~A$ vanishes is the set of vectors orthogonal to$~v_i$. Then $\ker(f)$ is the set of vectors where this happens for all rows at once, i.e., the subspace of vectors orthogonal to all of $v_1,\ldots,v_m$, which is the orthogonal complement of the span of $v_1,\ldots,v_m$. This is a way to visualise the statement, but remember that we are really just saying that all linear forms in $R$ vanish simultaneously.
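If it helps, the statement can also be checked numerically. Here is a minimal sketch with NumPy/SciPy, assuming an arbitrary example matrix (SciPy's `null_space` returns an orthonormal basis of the null space):

```python
import numpy as np
from scipy.linalg import null_space

# An arbitrary example matrix; its rows v_1, ..., v_m represent the linear forms.
A = np.array([[1.0, 0.0, -1.0],
              [0.0, 1.0, -1.0]])

N = null_space(A)            # columns form an orthonormal basis of ker(f)

# Every row of A is orthogonal to every null-space vector ...
print(np.allclose(A @ N, 0))             # True

# ... and so is any linear combination of the rows (a row-space element).
combo = np.array([2.0, -3.0]) @ A        # 2*v_1 - 3*v_2
print(np.allclose(combo @ N, 0))         # True
```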
Note that matrix multiplication can be defined via dot products. In particular, suppose that the $m \times n$ matrix $A$ has rows $a_1$, $a_2, \dots, a_m$; then for any vector $x = (x_1,\dots,x_n)^T$, we have:
$$
Ax = (a_1 \cdot x, a_2 \cdot x, \dots, a_m \cdot x)^T
$$
Now, if $x$ is in the null space, then $Ax = \vec 0$. So, if $x$ is in the null space of $A$, then $x$ must be orthogonal to every row of $A$, and hence to every linear combination of the rows, no matter what "combination of $A$" you've chosen.
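Here is a minimal sketch of that computation in NumPy, with an arbitrary matrix and vector, confirming that the row-by-row dot products agree with the usual matrix product:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
x = np.array([1.0, -2.0, 1.0])

# Ax computed as the vector of dot products of each row a_i with x.
row_by_row = np.array([np.dot(a_i, x) for a_i in A])

print(np.allclose(row_by_row, A @ x))   # True: the two computations agree
```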
Best Answer
Yes, if we interpret linear functionals in the usual way for matrices (via the dot product). This is one of the so-called "fundamental theorems" of linear algebra; see this article. In particular, if $A$ is a matrix (viewed as a linear transformation between Euclidean vector spaces), then
$$ R(A^T) = N(A)^{\perp}, $$ where $R(A^T)$ is the range of the transpose of $A$, which is the same as the row space, $N(A)$ is the null space, and the symbol $\perp$ indicates the orthogonal complement, which is the same as the annihilator when using the dot product as the realization of the dual space.
To prove this: If $x \in R(A^T)$, then $x = A^T z$ for some vector $z$. Now take any $y \in N(A)$. Then
$$ \langle x,y \rangle = x^T y = (A^T z)^T y = z^T A y = z^T (0) = 0, $$ where we used the fact that $Ay = 0$ since $y \in N(A)$. This proves that each element of $R(A^T)$ is orthogonal to every element of $N(A)$; that is, $R(A^T) \subseteq N(A)^{\perp}$.
To get equality, it suffices to prove that the two subspaces have equal dimension. This follows from rank–nullity: if $A$ has $n$ columns, then $\dim R(A^T) = \operatorname{rank}(A) = n - \dim N(A) = \dim N(A)^{\perp}$.
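One can also verify this dimension count numerically; the following sketch uses an arbitrary rank-deficient example together with NumPy's `matrix_rank` and SciPy's `null_space`:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 1.0, 1.0]])   # third row = first + second, so rank 2

n = A.shape[1]
rank = np.linalg.matrix_rank(A)      # dim R(A^T), the row-space dimension
nullity = null_space(A).shape[1]     # dim N(A)

# Rank-nullity: dim R(A^T) + dim N(A) = n, so dim R(A^T) = dim N(A)^perp.
print(rank, nullity, rank + nullity == n)   # 2 2 True
```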