Lemma 1: Given an $m\times n$ matrix $A,$ the null space of $A^T$ is the orthogonal complement of the column space of $A.$
Proof: Write $A=[c_1\:\cdots\:c_n]$ where the $c_j$ are the columns of $A,$ and note that for any $m$-dimensional vector $x$ we have $$A^Tx=\left[\begin{array}{c}c_1^T\\\vdots\\c_n^T\end{array}\right]x=\left[\begin{array}{c}c_1^Tx\\\vdots\\c_n^Tx\end{array}\right]=\left[\begin{array}{c}c_1\cdot x\\\vdots\\c_n\cdot x\end{array}\right].$$ Since the column space of $A$ is spanned by $c_1,\ldots,c_n$, the vector $x$ lies in the orthogonal complement of the column space of $A$ if and only if $x$ is orthogonal to each $c_j$, i.e. each $c_j\cdot x=0$, i.e. $A^Tx$ is the $n$-dimensional zero vector, i.e. $x$ is in the null space of $A^T.$ $\Box$
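Lemma 1 is easy to check numerically. The sketch below (the matrix $A$ and the SVD-based null-space computation are my own illustrative choices, not part of the proof) confirms that every vector in the null space of $A^T$ is orthogonal to every column of $A$:

```python
import numpy as np

# Hypothetical 3x2 example matrix, chosen arbitrarily for illustration.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# Null space of A.T via the SVD: the right singular vectors of A.T whose
# singular values are (numerically) zero span null(A.T).
U, s, Vt = np.linalg.svd(A.T)
tol = 1e-10
# A.T is 2x3, so pad s with zeros to match the 3 right singular vectors.
s_full = np.concatenate([s, np.zeros(Vt.shape[0] - len(s))])
N = Vt[s_full < tol].T          # columns of N span null(A.T)

# Each null-space vector has zero dot product with each column of A,
# i.e. A.T @ N is (numerically) the zero matrix.
print(np.allclose(A.T @ N, 0))  # True
```

Here $A$ has rank 2, so $\operatorname{null}(A^T)$ is one-dimensional, and the single basis vector found is orthogonal to both columns of $A$, as Lemma 1 predicts.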
Lemma 2: Let $V,W$ be subspaces of a finite-dimensional inner product space $X$. $V$ and $W$ have the same orthogonal complement if and only if $V=W$.
Proof: If $V=W$, then their orthogonal complements are trivially the same.
Conversely, suppose $V,W$ have the same orthogonal complement. Take $v\in V$. Since $X$ is the direct sum of $W$ and its orthogonal complement, and since $v\in V\subseteq X$, there exist unique $w,w'$ such that $v=w+w',$ $w\in W$, and $w'$ in the orthogonal complement of $W$. Since $V,W$ have the same orthogonal complement, $w'$ is orthogonal to $v,$ and so $$0=v\cdot w'=(w+w')\cdot w'=w\cdot w'+w'\cdot w'.\tag{$\star$}$$ Since $w'$ is in the orthogonal complement of $W$ and $w\in W$, we have $w\cdot w'=0$, so it follows from $(\star)$ that $$w'\cdot w'=0.$$ Now, no non-zero vector is self-orthogonal, so $w'$ must be the zero vector, whence $v=w\in W$, and so $V\subseteq W$. By a symmetrical argument, we likewise have $W\subseteq V$, so $V=W$. $\Box$
Proposition: Given matrices $A,B$ of the same dimensions, $A$ and $B$ have the same column space if and only if $A^T$ and $B^T$ have the same reduced row echelon form.
Proof: Let $rref(M)$ denote the reduced row echelon form of a matrix $M$. Recall that we can obtain $rref(M)$ by Gauss-Jordan elimination, which amounts to multiplication on the left by some finite collection of elementary matrices: for any $M$, there exist elementary matrices $E_1,\ldots,E_n$ of appropriate dimension such that $rref(M)=E_n\cdots E_1M.$ This collection of elementary matrices is not unique, but that isn't important. Note, though, that elementary matrices are invertible, so it follows that the null spaces of $rref(M)$ and $M$ are the same.
Moreover, the reduced row echelon form of a matrix is unique and is determined by its row space (equivalently, by its null space, the orthogonal complement of the row space). Thus, $A^T$ and $B^T$ have the same reduced row echelon form if and only if they have the same null space. By Lemma 1, $A^T$ and $B^T$ have the same null space if and only if the column spaces of $A$ and $B$ have the same orthogonal complement. By Lemma 2, the column spaces of $A$ and $B$ have the same orthogonal complement if and only if the column spaces of $A$ and $B$ are the same. $\Box$
Upshot: The Proposition lets us determine whether two matrices have the same column space without computing the column spaces themselves: we simply convert their transposes to reduced row echelon form and compare.
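This test is easy to carry out with a computer algebra system. Here is a minimal sketch using SymPy (the matrices $A$ and $B$ are made up for illustration; the columns of $B$ are linear combinations of the columns of $A$, so the two column spaces coincide):

```python
from sympy import Matrix

A = Matrix([[1, 0],
            [0, 1],
            [1, 1]])
# Columns of B: (col 1 of A) + (col 2 of A) and (col 1 of A) + 2*(col 2 of A),
# so B has the same column space as A.
B = Matrix([[1, 1],
            [1, 2],
            [2, 3]])

# Matrix.rref() returns a pair (rref matrix, tuple of pivot columns).
rref_AT, _ = A.T.rref()
rref_BT, _ = B.T.rref()

# By the Proposition, equal rref of the transposes means equal column spaces.
print(rref_AT == rref_BT)  # True
```

Comparing the rref matrices for equality is exactly the criterion the Proposition justifies.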
It is mostly a typographical convenience and sloppiness on the part of authors and typesetters. You do want to think about vectors as column vectors when you're doing calculus and you want to think of functions $f\colon\Bbb R^n\to\Bbb R^m$ as
$$f\left(\begin{matrix}x_1\\x_2\\\vdots\\x_n\end{matrix}\right) = \begin{bmatrix} f_1(\mathbf x) \\ f_2(\mathbf x) \\ \vdots \\ f_m(\mathbf x)\end{bmatrix}\,.$$
It's particularly important to do this so that, as in your case, one doesn't get confused about rows and columns in the derivative matrix. The $i$th row of $Df(\mathbf x)$ should consist of the partial derivatives of $f_i$.
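The row convention above can be checked directly with SymPy (the particular function $f$ below is an arbitrary example of mine, not from the text): `jacobian` places the partial derivatives of $f_i$ in row $i$.

```python
from sympy import symbols, Matrix

x, y = symbols('x y')
f = Matrix([x**2 * y, x + y])   # f: R^2 -> R^2, written as a column vector

Df = f.jacobian([x, y])
# Row 0 holds the partials of f_1 = x^2*y; row 1 those of f_2 = x + y:
print(Df)  # Matrix([[2*x*y, x**2], [1, 1]])
```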
It's tedious to typeset a book like this and, honestly, it wastes a lot of room, but it saves confusion. A few rigorous multivariable calculus books I've seen (and I'm sure a few I'm not remembering) try to be careful about these, among them (1) Hubbard and Hubbard, (2) Williamson, Crowell, and Trotter, and (3) my own. (By way of a comment, I'll add that I've had complaints from a few readers of my own differential geometry notes that I reverted to standard sloppiness, and denoted vectors and functions by the more convenient $(x,y,z)$ notation.)
There is no standard notation. Depending on the intended readership you could use ${\rm row}_i( A)$ or $ A_{\,i\,\cdot}$ for the row matrix $[a_{i1}\>a_{i2}\>\ldots \>a_{in}]$, and ${\rm col}_j( A)$ or $ A_{\,\cdot j}$ for the column matrix $$\left[\matrix{a_{1j}\cr a_{2j}\cr\vdots\cr a_{nj}\cr}\right]\ .$$ You then would have, e.g., $${\rm elm}_{ij}(AB)={\rm row}_i(A)\cdot{\rm col}_j(B)\ .$$
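The identity for ${\rm elm}_{ij}(AB)$ is easy to verify numerically. In the sketch below, the helpers `row` and `col` are my own stand-ins mirroring the ${\rm row}_i$/${\rm col}_j$ notation, and the matrices are arbitrary examples:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

def row(M, i):
    """row_i(M) as a 1-D vector."""
    return M[i, :]

def col(M, j):
    """col_j(M) as a 1-D vector."""
    return M[:, j]

i, j = 1, 0
# elm_ij(AB) = row_i(A) . col_j(B): entry (i,j) of the product is the
# dot product of the i-th row of A with the j-th column of B.
print((A @ B)[i, j] == row(A, i) @ col(B, j))  # True
```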