Here is a simple argument that I think agrees with the initial discussion on this question some years ago: the stress tensor quoted in the original question does not agree with the result, and should in fact be the transpose of what is shown.
Take the inner product of del with the stress tensor: $\nabla\cdot\mathbf{T}$, where
$\nabla=\partial_{x}\mathbf{i}+\partial_{y}\mathbf{j}+\partial_{z}\mathbf{k}$,
and $\mathbf{T}$ is the second-order stress tensor $\tau_{ij}$ with
components $\left(\begin{array}{ccc}
\tau_{11} & \tau_{12} & \tau_{13}\\
\tau_{21} & \tau_{22} & \tau_{23}\\
\tau_{31} & \tau_{32} & \tau_{33}
\end{array}\right)$, which can also be expressed as $\tau_{11}\mathbf{ii}+\tau_{12}\mathbf{ij}+\tau_{13}\mathbf{ik}+\tau_{21}\mathbf{ji}+\tau_{22}\mathbf{jj}+\tau_{23}\mathbf{jk}+\tau_{31}\mathbf{ki}+\tau_{32}\mathbf{kj}+\tau_{33}\mathbf{kk}$
Using the rule that for a vector $\mathbf{a}$ and a dyad (second-order
tensor) $\mathbf{bc}$ (the dyadic product of vectors $\mathbf{b}$ and $\mathbf{c}$)
we have $\mathbf{a\cdot(bc)=(a\cdot b)c}$, then:
$$
\nabla\cdot\mathbf{T}=(\partial_{x}\mathbf{i}+\partial_{y}\mathbf{j}+\partial_{z}\mathbf{k})\cdot\left(\tau_{11}\mathbf{ii}+\tau_{12}\mathbf{ij}+\tau_{13}\mathbf{ik}+\tau_{21}\mathbf{ji}+\tau_{22}\mathbf{jj}+\tau_{23}\mathbf{jk}+\tau_{31}\mathbf{ki}+\tau_{32}\mathbf{kj}+\tau_{33}\mathbf{kk}\right)
$$
$$
=\left(\partial_{x}\mathbf{i}.\tau_{11}\mathbf{ii}\right)+\left(\partial_{x}\mathbf{i}.\tau_{12}\mathbf{ij}\right)+...+\left(\partial_{x}\mathbf{i}.\tau_{33}\mathbf{kk}\right)+\left(\partial_{y}\mathbf{j}.\tau_{11}\mathbf{ii}\right)+\left(\partial_{y}\mathbf{j}.\tau_{12}\mathbf{ij}\right)+...+\left(\partial_{y}\mathbf{j}.\tau_{33}\mathbf{kk}\right)+\left(\partial_{z}\mathbf{k}.\tau_{11}\mathbf{ii}\right)+\left(\partial_{z}\mathbf{k}.\tau_{12}\mathbf{ij}\right)+...+\left(\partial_{z}\mathbf{k}.\tau_{33}\mathbf{kk}\right)
$$
$$
=\left(\partial_{x}\tau_{11}\mathbf{\left(i.i\right)i}\right)+\left(\partial_{x}\tau_{12}\mathbf{\left(i.i\right)j}\right)+...+\left(\partial_{x}\tau_{33}\mathbf{\left(i.k\right)k}\right)+\left(\partial_{y}\tau_{11}\mathbf{\left(j.i\right)i}\right)+\left(\partial_{y}\tau_{12}\mathbf{\left(j.i\right)j}\right)+...+\left(\partial_{y}\tau_{33}\mathbf{\left(j.k\right)k}\right)+\left(\partial_{z}\tau_{11}\mathbf{\left(k.i\right)i}\right)+\left(\partial_{z}\tau_{12}\mathbf{\left(k.i\right)j}\right)+...+\left(\partial_{z}\tau_{33}\mathbf{\left(k.k\right)k}\right)
$$
All of the inner products are zero apart from $\mathbf{i.i}$,
$\mathbf{j.j}$ and $\mathbf{k.k}$, which equal 1, so the above reduces
to:
$$
=\partial_{x}\tau_{11}\mathbf{i}+\partial_{x}\tau_{12}\mathbf{j}+\partial_{x}\tau_{13}\mathbf{k}+\partial_{y}\tau_{21}\mathbf{i}+\partial_{y}\tau_{22}\mathbf{j}+\partial_{y}\tau_{23}\mathbf{k}+\partial_{z}\tau_{31}\mathbf{i}+\partial_{z}\tau_{32}\mathbf{j}+\partial_{z}\tau_{33}\mathbf{k}
$$
$$
=\left(\partial_{x}\tau_{11}+\partial_{y}\tau_{21}+\partial_{z}\tau_{31}\right)\mathbf{i}+\left(\partial_{x}\tau_{12}+\partial_{y}\tau_{22}+\partial_{z}\tau_{32}\right)\mathbf{j}+\left(\partial_{x}\tau_{13}+\partial_{y}\tau_{23}+\partial_{z}\tau_{33}\right)\mathbf{k}
$$
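This contraction over the first index can be checked numerically. Below is a minimal sketch in plain Python; the polynomial test components $\tau_{ij}=x^{i+1}y^{j+1}$ and all names are my own choices for illustration. It differentiates by central finite differences and confirms that summing $\partial_{i}\tau_{ij}$ over the first index reproduces the components derived above.

```python
# Hypothetical smooth test components (my own choice, for illustration only):
# tau_ij(x, y, z) = x**(i+1) * y**(j+1), with i, j = 0, 1, 2
def tau(i, j, p):
    x, y, z = p
    return x**(i + 1) * y**(j + 1)

def d(f, k, p, h=1e-6):
    # Central finite difference of f along coordinate k at point p
    q = list(p); q[k] += h
    r = list(p); r[k] -= h
    return (f(q) - f(r)) / (2 * h)

def div_T(p):
    # (del . T)_j = sum_i d/dx_i tau_ij : contraction over the FIRST index,
    # exactly as in the derivation above
    return [sum(d(lambda q, i=i, j=j: tau(i, j, q), i, p) for i in range(3))
            for j in range(3)]

p = (1.3, 0.7, 2.1)
x, y, z = p
# Analytic components for these test functions; e.g. for j = 0 this is
# d/dx tau11 + d/dy tau21 + d/dz tau31 = y + x**2
expected = [y**(j + 1) + (j + 1) * x**2 * y**j for j in range(3)]
print(all(abs(a - b) < 1e-5 for a, b in zip(div_T(p), expected)))  # True
```

Contracting on the second index instead would give a different answer for these test components, which is exactly the transpose discrepancy the argument above points out.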
Really, there isn't a notation that is more correct; it is just a matter of convention. All of them mean the operation $\sum_{i = 1}^n a_ib_i$. The important thing is that you understand what you must do. As you said yourself, in $\mathbf{A \cdot B^T}$ we see $\mathbf{A}$ and $\mathbf{B}$ as row vectors. The ${}^T$ serves just to remind you that you can see the dot product as a matrix multiplication: after all, we will have a $1 \times n$ matrix times an $n \times 1$ matrix, which is well defined and gives as a result a $1 \times 1$ matrix, i.e., a number.
The notation $\mathbf{A \cdot B}$ doesn't suggest any of these things, and you can think directly of the termwise multiplication, then the sum.
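To make the two readings concrete, here is a small sketch in plain Python (the vectors and names are my own): the same pair of vectors, computed once termwise and once as a $1 \times n$ times $n \times 1$ matrix product.

```python
# Dot product two ways: termwise multiply-then-sum, and as a 1xn times nx1
# matrix product (the "A . B^T" reading, with A and B as row vectors)
A = [1.0, 2.0, 3.0]
B = [4.0, 5.0, 6.0]

# Termwise reading: multiply matching entries, then sum
termwise = sum(a * b for a, b in zip(A, B))

# Matrix reading: a 1x3 row times a 3x1 column gives a 1x1 result
row = [A]                 # 1 x n
col = [[b] for b in B]    # n x 1
matprod = sum(row[0][k] * col[k][0] for k in range(3))

print(termwise, matprod)  # 32.0 32.0
```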
In Linear Algebra, we often talk about inner products in arbitrary vector spaces, a sort of generalization of the dot product. Given vectors $\mathbf{A}$ and $\mathbf{B}$, a widely used notation is $\langle \mathbf{A}, \mathbf{B} \rangle$. An inner product (in a real vector space), put simply, is a symmetric bilinear form (form means that the result is a number), which is positive definite. That means:
i) $\langle \mathbf{A}, \mathbf{B} \rangle=\langle \mathbf{B}, \mathbf{A} \rangle $;
ii) $\langle \mathbf{A} + \lambda \mathbf{B}, \mathbf{C} \rangle = \langle \mathbf{A}, \mathbf{C} \rangle + \lambda \langle \mathbf{B}, \mathbf{C} \rangle$ ;
iii) $\langle \mathbf{A}, \mathbf{A} \rangle > 0 $ if $\mathbf{A} \neq \mathbf{0}$
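The three axioms are easy to check numerically for the standard dot product on $\Bbb R^3$; a minimal sketch in plain Python, with arbitrary vectors of my own choosing:

```python
# Standard dot product on R^3 (the vectors and lambda are arbitrary choices)
def ip(u, v):
    return sum(x * y for x, y in zip(u, v))

A = [1.0, -2.0, 3.0]
B = [0.5, 4.0, -1.0]
C = [2.0, 2.0, 2.0]
lam = 3.0

# i) symmetry
assert ip(A, B) == ip(B, A)
# ii) linearity in the first argument
lhs = ip([a + lam * b for a, b in zip(A, B)], C)
assert abs(lhs - (ip(A, C) + lam * ip(B, C))) < 1e-12
# iii) positive definiteness
assert ip(A, A) > 0
print("all three axioms hold")
```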
Personally, I don't like the notation $\mathbf{A \cdot B^T}$, because when working in spaces more general than $\Bbb R^n$ we don't always have finite dimension, so matrices don't work so well. I have never seen a notation different from the three I talked about. But I reinforce what I said at the beginning: there isn't a correct notation, and you should get used to all of them as far as possible.
Best Answer
I believe you're supposed to "vectorize" the matrix, i.e. rearrange it into an $n^2 \times 1$ vector.
Equivalently, you can take $A\cdot B = \textrm{tr}(A^TB)$ as the definition.
EDIT - Example:
If $A = \left(\begin{array}{cc}a & b \\ c & d \end{array}\right)$, $B = \left(\begin{array}{cc}e & f \\ g & h \end{array}\right)$, then
$$ A^TB = \left(\begin{array}{cc}ae + cg & af + ch \\ be + dg & bf + dh \end{array}\right) $$
and the trace is $ae + bf + cg + dh$. Likewise, if we first vectorize the matrices $$ \widetilde{A} = \left(\begin{array}{cccc}a & b & c & d \end{array}\right)^T,\qquad \widetilde{B} = \left(\begin{array}{cccc}e & f & g & h \end{array}\right)^T $$
it's straightforward to see $\widetilde{A}\cdot\widetilde{B} = \textrm{tr}(A^TB)$.
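Both definitions can be checked against the $2\times 2$ example above. A minimal sketch in plain Python (the helper names are my own), plugging in $a,\dots,h = 1,\dots,8$:

```python
def frobenius_trace(A, B):
    # tr(A^T B): form A^T B explicitly, then sum its diagonal
    n = len(A)
    AtB = [[sum(A[k][i] * B[k][j] for k in range(n)) for j in range(n)]
           for i in range(n)]
    return sum(AtB[i][i] for i in range(n))

def frobenius_vec(A, B):
    # Vectorize both matrices, then take the ordinary dot product
    a = [x for row in A for x in row]
    b = [x for row in B for x in row]
    return sum(p * q for p, q in zip(a, b))

A = [[1.0, 2.0], [3.0, 4.0]]   # a b / c d
B = [[5.0, 6.0], [7.0, 8.0]]   # e f / g h
print(frobenius_trace(A, B), frobenius_vec(A, B))  # 70.0 70.0
```

Both routes give $ae + bf + cg + dh = 5 + 12 + 21 + 32 = 70$, as expected.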