Really, there isn't a notation that is more correct; it is just a matter of convention. All of them mean the operation $\sum_{i = 1}^n a_ib_i$. The important thing is that you understand what you must do. As you said yourself, in $\mathbf{A \cdot B^T}$ we see $\mathbf{A}$ and $\mathbf{B}$ as row vectors. The ${}^T$ serves just to remind you that you can see the dot product as a matrix multiplication: after all, we will have a $1 \times n$ matrix times an $n \times 1$ matrix, which is well defined and gives as a result a $1 \times 1$ matrix, i.e., a number.
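For instance, with $n = 2$ (a generic illustration, not tied to any particular vectors):
$$
\mathbf{A}\,\mathbf{B}^T = \begin{pmatrix} a_1 & a_2 \end{pmatrix} \begin{pmatrix} b_1 \\ b_2 \end{pmatrix} = a_1 b_1 + a_2 b_2 .
$$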
The notation $\mathbf{A \cdot B}$ doesn't suggest any of this, and you can think directly of the termwise multiplication followed by a sum.
In linear algebra, we often talk about inner products on arbitrary vector spaces, a sort of generalization of the dot product. Given vectors $\mathbf{A}$ and $\mathbf{B}$, a widely used notation is $\langle \mathbf{A}, \mathbf{B} \rangle$. An inner product (on a real vector space) is, put simply, a symmetric bilinear form ("form" meaning that the result is a number) which is positive definite. That means:
i) $\langle \mathbf{A}, \mathbf{B} \rangle=\langle \mathbf{B}, \mathbf{A} \rangle $;
ii) $\langle \mathbf{A} + \lambda \mathbf{B}, \mathbf{C} \rangle = \langle \mathbf{A}, \mathbf{C} \rangle + \lambda \langle \mathbf{B}, \mathbf{C} \rangle$ ;
iii) $\langle \mathbf{A}, \mathbf{A} \rangle > 0 $ if $\mathbf{A} \neq \mathbf{0}$.

(Condition (ii) gives linearity in the first argument; together with the symmetry in (i), this makes the form bilinear.)
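As a quick check (using the componentwise formula from the beginning), the ordinary dot product satisfies all three conditions:
$$
\langle \mathbf{A}, \mathbf{B} \rangle = \sum_{i=1}^n a_i b_i = \sum_{i=1}^n b_i a_i = \langle \mathbf{B}, \mathbf{A} \rangle,
\qquad
\langle \mathbf{A}, \mathbf{A} \rangle = \sum_{i=1}^n a_i^2 > 0 \text{ whenever } \mathbf{A} \neq \mathbf{0}.
$$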
Personally, I don't like the notation $\mathbf{A \cdot B^T}$, because when working in spaces more general than $\Bbb R^n$ we don't always have finite dimension, so matrices don't work so well. I have never seen a notation different from the three I talked about. But I repeat what I said at the beginning: there isn't one correct notation, and you should get used to all of them as far as possible.
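To see why matrices break down, here is a standard example of an inner product on an infinite-dimensional space (not part of the discussion above, just an illustration): on the space of continuous functions on $[0,1]$, one commonly takes
$$
\langle f, g \rangle = \int_0^1 f(x)\,g(x)\,dx,
$$
which satisfies (i)–(iii), yet there is no finite row or column vector to transpose.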
Think of a $2 \times 2$ orthogonal matrix. If its determinant is $1$, then it will be a matrix of the form $$
\begin{pmatrix}
\cos(\theta)&-\sin(\theta)\\
\sin(\theta)&\cos(\theta)\\
\end{pmatrix}
$$
So its transpose is
$$
\begin{pmatrix}
\cos(\theta)&\sin(\theta)\\
-\sin(\theta)&\cos(\theta)\\
\end{pmatrix}
$$
The first matrix is a rotation by $\theta$ counter-clockwise and the second is a rotation by $\theta$ clockwise, so it makes sense that they are inverses of each other.
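One can verify this directly (a short computation using $\cos^2(\theta) + \sin^2(\theta) = 1$):
$$
\begin{pmatrix}
\cos(\theta)&-\sin(\theta)\\
\sin(\theta)&\cos(\theta)
\end{pmatrix}
\begin{pmatrix}
\cos(\theta)&\sin(\theta)\\
-\sin(\theta)&\cos(\theta)
\end{pmatrix}
=
\begin{pmatrix}
\cos^2(\theta)+\sin^2(\theta) & \cos(\theta)\sin(\theta)-\sin(\theta)\cos(\theta)\\
\sin(\theta)\cos(\theta)-\cos(\theta)\sin(\theta) & \sin^2(\theta)+\cos^2(\theta)
\end{pmatrix}
=
\begin{pmatrix}
1&0\\
0&1
\end{pmatrix}.
$$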
If its determinant is $-1$, think of it as a reflection matrix.
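For a concrete picture (a standard fact, added here for illustration), every $2 \times 2$ orthogonal matrix with determinant $-1$ has the form
$$
\begin{pmatrix}
\cos(\theta)&\sin(\theta)\\
\sin(\theta)&-\cos(\theta)
\end{pmatrix},
$$
a reflection across the line through the origin at angle $\theta/2$. This matrix is its own transpose, which matches the fact that a reflection is its own inverse.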
Higher-dimensional orthogonal matrices can, I believe, be thought of in a similar manner.
Best Answer
The idea here is that a dot-product can be interpreted as the product (as in matrix multiplication) of a row-vector and a column-vector. Note that vectors are considered to be "column-vectors" by default, so treating a vector as a row-vector means that we need to transpose it.
For example, if we have the vectors $$ \mathbf u = \pmatrix{1\\2\\3}, \quad \mathbf v = \pmatrix{-2\\0\\1}, $$ then we can express the dot-product $\mathbf u \cdot \mathbf v$ as the matrix product $$ \mathbf u \cdot \mathbf v = \mathbf u^T \mathbf v = \pmatrix{1&2&3} \pmatrix{-2\\0\\1} = (1)(-2) + (2)(0) + (3)(1) = 1. $$ Note that there is a slight (and very common) abuse of notation happening here: we have the product of a $1 \times 3$ and $3\times 1$ matrix, which results in a $1\times 1$ matrix. However, instead of writing our answer as the $1\times 1$ matrix $[1]$, we think of it as simply being the scalar $1$. In general, $1\times 1$ matrices are thought of as scalars.
From there, the rest of the steps in the proof use the properties of the transpose as they relate to matrix multiplication, and then reinterpret the final $\mathbf a^T\mathbf b$ as a dot product.
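For example, a typical manipulation of this kind (sketched here as an illustration; the proof referred to above is not quoted, so the exact steps may differ) uses the rule $(AB)^T = B^T A^T$:
$$
(A \mathbf u) \cdot \mathbf v = (A \mathbf u)^T \mathbf v = \mathbf u^T A^T \mathbf v = \mathbf u^T (A^T \mathbf v) = \mathbf u \cdot (A^T \mathbf v),
$$
where the last step reads $\mathbf u^T (A^T \mathbf v)$ as a dot product again.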