Let
$$A = \begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{pmatrix}, ~~~~
B = \begin{pmatrix}
b_{11} & b_{12} & \cdots & b_{1n} \\
b_{21} & b_{22} & \cdots & b_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
b_{n1} & b_{n2} & \cdots & b_{nn}
\end{pmatrix}.
$$
So we have
$$\begin{align}
\operatorname{vec}(A) \operatorname{vec}(B)^T &=
\begin{pmatrix}
a_{11} \\ \vdots \\ a_{n1} \\ a_{12} \\ \vdots \\ a_{n2} \\ \vdots \\ a_{1n} \\ \vdots \\ a_{nn}
\end{pmatrix}
\begin{pmatrix}
b_{11} & \cdots & b_{n1} & b_{12} & \cdots & b_{n2} & \cdots & b_{1n} & \cdots & b_{nn}
\end{pmatrix}
\\ &=
\begin{pmatrix}
a_{11}b_{11} & a_{11}b_{21} & \cdots & a_{11}b_{nn} \\
a_{21}b_{11} & a_{21}b_{21} & \cdots & a_{21}b_{nn} \\
\vdots & \vdots & \ddots & \vdots \\
a_{nn}b_{11} & a_{nn}b_{21} & \cdots & a_{nn}b_{nn}
\end{pmatrix}
\end{align}$$
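This outer product is easy to check numerically. Below is a small NumPy sketch (an illustration, not part of the argument); $\operatorname{vec}$ is taken as column-stacking, which in NumPy is Fortran-order flattening, matching the definition used above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

def vec(M):
    # stack the columns of M into one long column (Fortran order)
    return M.flatten(order="F")

outer = np.outer(vec(A), vec(B))   # vec(A) vec(B)^T, an n^2 x n^2 matrix
print(outer.shape)                 # (9, 9)

# the row corresponding to a_{ij} equals a_{ij} * vec(B)^T;
# with column-stacking, a_{ij} sits at position i + j*n (0-based) in vec(A)
i, j = 1, 2
row = outer[i + j * n]
print(np.allclose(row, A[i, j] * vec(B)))  # True
```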
We see that each row of this $n^2 \times n^2$ matrix has the form $a_{ij} \operatorname{vec}(B)^T$, one row for each entry $a_{ij}$ of $A$, in the order in which the entries appear in $\operatorname{vec}(A)$. Taking a look at
$$A \otimes B = \begin{pmatrix}
a_{11} b_{11} & a_{11} b_{12} & \cdots & a_{11} b_{1n} &
\cdots & \cdots & a_{1n} b_{11} & a_{1n} b_{12} & \cdots & a_{1n} b_{1n} \\
a_{11} b_{21} & a_{11} b_{22} & \cdots & a_{11} b_{2n} &
\cdots & \cdots & a_{1n} b_{21} & a_{1n} b_{22} & \cdots & a_{1n} b_{2n} \\
\vdots & \vdots & \ddots & \vdots & & & \vdots & \vdots & \ddots & \vdots \\
a_{11} b_{n1} & a_{11} b_{n2} & \cdots & a_{11} b_{nn} &
\cdots & \cdots & a_{1n} b_{n1} & a_{1n} b_{n2} & \cdots & a_{1n} b_{nn} \\
\vdots & \vdots & & \vdots & \ddots & & \vdots & \vdots & & \vdots \\
\vdots & \vdots & & \vdots & & \ddots & \vdots & \vdots & & \vdots \\
a_{n1} b_{11} & a_{n1} b_{12} & \cdots & a_{n1} b_{1n} &
\cdots & \cdots & a_{nn} b_{11} & a_{nn} b_{12} & \cdots & a_{nn} b_{1n} \\
a_{n1} b_{21} & a_{n1} b_{22} & \cdots & a_{n1} b_{2n} &
\cdots & \cdots & a_{nn} b_{21} & a_{nn} b_{22} & \cdots & a_{nn} b_{2n} \\
\vdots & \vdots & \ddots & \vdots & & & \vdots & \vdots & \ddots & \vdots \\
a_{n1} b_{n1} & a_{n1} b_{n2} & \cdots & a_{n1} b_{nn} &
\cdots & \cdots & a_{nn} b_{n1} & a_{nn} b_{n2} & \cdots & a_{nn} b_{nn}
\end{pmatrix}$$
we see that $a_{ij}\operatorname{vec}(B)^T$ is the transposed vectorization of the $n \times n$ block at position $(i, j)$ in the $n^2 \times n^2$ matrix $A \otimes B$.
In other words, the rows of $\operatorname{vec}(A)\operatorname{vec}(B)^T$ are the transposed vectorizations of the $n \times n$ submatrices of $A \otimes B$ taken top to bottom, then left to right.
In particular, $\operatorname{vec}(A)\operatorname{vec}(B)^T$ and $A \otimes B$ have the same entries, just in different places, so their Frobenius norms are equal.
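Both the block correspondence and the equality of Frobenius norms can be verified numerically. A NumPy sketch (again taking $\operatorname{vec}$ as column-stacking, i.e. Fortran-order flattening):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

def vec(M):
    return M.flatten(order="F")

outer = np.outer(vec(A), vec(B))
K = np.kron(A, B)

# same multiset of entries a_{ij} b_{kl}, hence equal Frobenius norms
print(np.isclose(np.linalg.norm(outer, "fro"), np.linalg.norm(K, "fro")))  # True

# row i + j*n of the outer product is the transposed vectorization
# of the (i, j) block of A ⊗ B, which is a_{ij} B
i, j = 2, 0
block = K[i*n:(i+1)*n, j*n:(j+1)*n]
print(np.allclose(outer[i + j*n], vec(block)))  # True
```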
The norm $\|\operatorname{vec}(A)\operatorname{vec}(B)^T\|_\infty$ is the greatest absolute row sum of the matrix. The absolute row sum of the row corresponding to $a_{ij}$ is
$$|a_{ij}| \sum_{k, l} |b_{kl}|,$$
so the norm is $\max_{i,j} |a_{ij}| \sum_{k,l} |b_{kl}|$. A similar argument works for $\|\operatorname{vec}(A)\operatorname{vec}(B)^T\|_1$, the greatest absolute column sum, with the roles of $A$ and $B$ interchanged.
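These two identities are easy to sanity-check with NumPy, where `np.linalg.norm` with `ord=np.inf` and `ord=1` computes exactly the maximum absolute row sum and column sum of a matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

def vec(M):
    return M.flatten(order="F")

outer = np.outer(vec(A), vec(B))

# induced infinity-norm = max absolute row sum
lhs = np.linalg.norm(outer, np.inf)
rhs = np.abs(A).max() * np.abs(B).sum()
print(np.isclose(lhs, rhs))   # True

# induced 1-norm = max absolute column sum; roles of A and B swap
lhs1 = np.linalg.norm(outer, 1)
rhs1 = np.abs(B).max() * np.abs(A).sum()
print(np.isclose(lhs1, rhs1))  # True
```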
$\operatorname{vec}(A)\operatorname{vec}(B)^T$ and $A \otimes B$ will in general have different ranks. $\operatorname{vec}(A)\operatorname{vec}(B)^T$ will have rank one and $A \otimes B$ will have rank $\operatorname{rank}(A)\operatorname{rank}(B)$.
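A quick numerical illustration of the rank claim (a sketch; $B$ is deliberately constructed as a rank-one matrix so that the two ranks differ):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
A = rng.standard_normal((n, n))                                # generically rank n
B = np.outer(rng.standard_normal(n), rng.standard_normal(n))   # rank 1 by construction

def vec(M):
    return M.flatten(order="F")

outer = np.outer(vec(A), vec(B))
K = np.kron(A, B)

print(np.linalg.matrix_rank(outer))   # 1
# rank(A ⊗ B) = rank(A) * rank(B)
print(np.linalg.matrix_rank(K) ==
      np.linalg.matrix_rank(A) * np.linalg.matrix_rank(B))  # True
```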
Since $\operatorname{vec}(A)\operatorname{vec}(B)^T$ has rank one, it has exactly one non-zero singular value, which determines its spectral norm. Indeed, we have a singular value decomposition given by
$$\operatorname{vec}(A)\operatorname{vec}(B)^T = \|\operatorname{vec}(A)\| \|\operatorname{vec}(B)\| \frac{\operatorname{vec}(A)}{\|\operatorname{vec}(A)\| }\frac{\operatorname{vec}(B)^T}{\|\operatorname{vec}(B)\| },$$
i.e. the singular value, and hence the spectral norm, is $\|\operatorname{vec}(A)\| \|\operatorname{vec}(B)\|$. Here $\|\operatorname{vec}(A)\|$ is the same as the Frobenius norm of $A$, which equals the square root of the sum of the squares of the singular values of $A$.
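The spectral-norm formula can also be checked numerically (a NumPy sketch; `np.linalg.norm(·, 2)` returns the largest singular value of a matrix):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

def vec(M):
    return M.flatten(order="F")

outer = np.outer(vec(A), vec(B))

# spectral norm (largest singular value) of the rank-one matrix
sigma = np.linalg.norm(outer, 2)
print(np.isclose(sigma, np.linalg.norm(vec(A)) * np.linalg.norm(vec(B))))  # True

# ||vec(A)||_2 coincides with the Frobenius norm of A
print(np.isclose(np.linalg.norm(vec(A)), np.linalg.norm(A, "fro")))        # True
```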
A vector should always be a column vector. If you want to talk about a "row vector", you should write it as the transpose of some vector, or as a matrix with just one row.
I will add that, technically, the scalar product cannot be written as a matrix product the way you are doing it. If $u,v$ are vectors, then the matrix product $u^T v$ is a $1\times 1$ matrix, whereas the scalar product $u\cdot v$ is a scalar. It is common to ignore this distinction and treat them as the same, but one should be careful when relying on it, for example when multiplying the scalar product by a matrix.
Added: You cannot say that $a$ is a vector and then write $a=(4,5,6)$ as a row matrix. You can, however, say that $a$ is the $1\times 3$ matrix $(4,5,6)$, or that $a$ is a vector with $a^T=(4,5,6)$ (in the latter case, $a$ is a column vector). Suppose you defined the matrices $a=(4,5,6)$ and $b=\begin{pmatrix}1\\2\\3\end{pmatrix}$. Then the matrix product $a\cdot b$ is the $1\times 1$ matrix $[32]$, not the number $32$, which is the scalar product of the vectors $\begin{pmatrix}4\\5\\6\end{pmatrix}$ and $\begin{pmatrix}1\\2\\3\end{pmatrix}$. Likewise, the matrix product $b^T\cdot a^T$ equals the $1\times 1$ matrix $[32]$. We can therefore conclude that $a\cdot b=b^T\cdot a^T$ (note that you made a typo in your post and wrote $a^T\cdot b^T$ for the right-hand side). This is consistent with Wikipedia, which states that $(a\cdot b)^T=b^T\cdot a^T$. Your typo'd version is, however, not correct even in this particular case: $a^T$ is $3\times 1$ and $b^T$ is $1\times 3$, so $a^T\cdot b^T$ is a $3\times 3$ matrix, not $[32]$.
Added after the edit: You get the identity $a^T\cdot b=b^T\cdot a$. This equation is not true for general matrices $a,b$, so you will not find it on Wikipedia. What is true in general is the identity $(a^T\cdot b)^T=b^T\cdot a$. Your identity holds here because both products are $1\times 1$ matrices, and the transpose of a $1\times 1$ matrix is the matrix itself.
Best Answer
Reading this paper cleared up my confusion. The point is that the vec(.) operator stacks ALL the rows into one long column, not just a single row. Thus, in the case above, it produces a 12x1 vector, which makes the multiplication possible.
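Note that row-stacking and column-stacking are two different conventions for vec; the answers above use column-stacking. In NumPy the two differ only in the flattening order (a sketch, with a hypothetical 3x4 example matrix to reproduce the 12x1 shape mentioned in the comment):

```python
import numpy as np

M = np.arange(12).reshape(3, 4)   # an example 3x4 matrix

# row-stacking (what the comment describes): concatenate the rows (C order)
rvec = M.flatten(order="C").reshape(-1, 1)
print(rvec.shape)                  # (12, 1)

# column-stacking (the usual vec convention used earlier in this thread)
cvec = M.flatten(order="F").reshape(-1, 1)
print(cvec.shape)                  # (12, 1)

# both give a 12x1 column, but the entries appear in different orders
print(rvec[:4].ravel())            # first row of M: [0 1 2 3]
print(cvec[:3].ravel())            # first column of M: [0 4 8]
```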