The dot product is the special case of a more general concept, the inner product. If you have a vector space $ V $ over the reals or the complex numbers, then an inner product is a map $ f : V \times V \to \mathbb{C} $ or $ f : V \times V \to \mathbb{R} $ which is conjugate symmetric, positive definite, and linear in its first argument. We usually write $ f(u, v) = \langle u, v \rangle $, in which case these properties can be summed up as follows:
- Conjugate symmetry: $ \overline{\langle u, v \rangle} = \langle v, u \rangle $, where $ \bar{z} $ denotes complex conjugation. Note that this implies $ \langle u, u \rangle $ is always real for any vector $ u $.
- Positive definiteness: $ \langle v, v \rangle \geq 0 $ for any $ v \in V $, with equality holding iff $ v = 0 $.
- Linearity in the first argument: $ \langle \alpha u + \beta v, w \rangle = \alpha \langle u, w \rangle + \beta \langle v, w \rangle $ where $ u, v, w \in V $ and $ \alpha, \beta $ are in the field of scalars.
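As a quick sanity check, here is a small numerical sketch (using numpy, purely for illustration) verifying the three axioms for the standard inner product on $ \mathbb{C}^n $, namely $ \langle u, v \rangle = \sum_i u_i \overline{v_i} $, which is linear in the first argument as above:

```python
# Numerically check the three inner-product axioms for the standard
# inner product on C^n: <u, v> = sum_i u_i * conj(v_i).
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=3) + 1j * rng.normal(size=3)
v = rng.normal(size=3) + 1j * rng.normal(size=3)
w = rng.normal(size=3) + 1j * rng.normal(size=3)
a, b = 2 - 1j, 0.5 + 3j  # arbitrary complex scalars

def inner(x, y):
    # Linear in x, conjugate-linear in y (matching the convention above).
    return np.sum(x * np.conj(y))

# Conjugate symmetry: conj(<u, v>) == <v, u>
assert np.isclose(np.conj(inner(u, v)), inner(v, u))
# Positive definiteness: <v, v> is real and positive for v != 0
assert np.isclose(inner(v, v).imag, 0) and inner(v, v).real > 0
# Linearity in the first argument
assert np.isclose(inner(a * u + b * v, w), a * inner(u, w) + b * inner(v, w))
```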
If $ V = \mathbb{R}^n $, then we can fix a basis $ B = \{ b_i \in \mathbb{R}^n : 1 \leq i \leq n \} $ and define $ \langle b_i, b_i \rangle = 1 $ and $ \langle b_i, b_j \rangle = 0 $ for $ i \neq j $. Extending this to all of $ \mathbb{R}^n $ by linearity in each argument gives us
$$ \left \langle \sum_{k=1}^{n} c_k b_k, \sum_{j=1}^{n} d_j b_j \right \rangle = \sum_{1 \leq k, j \leq n} c_k d_j \langle b_k, b_j \rangle = \sum_{i=1}^{n} c_i d_i $$
where positive definiteness is readily verified. You will recognize this expression as the definition of the dot product. Indeed, if we take our basis $ B $ to be the standard basis of $ \mathbb{R}^n $, then this inner product is the dot product.
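In coordinates, this is a one-line computation. A minimal sketch with example vectors in $ \mathbb{R}^3 $ (the specific numbers are just for illustration):

```python
# With the standard basis of R^3, the inner product above reduces to
# the familiar dot product <c, d> = sum_i c_i * d_i.
c = [1.0, 2.0, 3.0]
d = [4.0, 5.0, 6.0]

dot = sum(ci * di for ci, di in zip(c, d))
print(dot)  # 1*4 + 2*5 + 3*6 = 32.0
```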
Why is this formalism more powerful? One important result about inner products is the Cauchy-Schwarz inequality, which says that $ |\langle u, v \rangle| \leq |u| |v| $, where $ |u| = \sqrt{\langle u, u \rangle} $. This tells us that
$$ -1 \leq \frac{\langle u, v \rangle}{|u| |v|} \leq 1 $$
assuming that our field of scalars is $ \mathbb{R} $. We then see that the arccosine of this expression is well-defined, so we can define the angle between nonzero vectors $ u $ and $ v $ as
$$ \theta = \arccos \left( \frac{\langle u, v \rangle}{|u| |v|} \right) $$
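The angle formula is easy to compute directly. Here is a sketch with two concrete vectors (chosen so the answer is a familiar angle):

```python
# Compute the angle between two vectors via the arccos formula.
# Cauchy-Schwarz guarantees the quotient lies in [-1, 1], so acos
# is well defined.
import math

u = [1.0, 0.0]
v = [1.0, 1.0]

dot = sum(a * b for a, b in zip(u, v))
norm = lambda x: math.sqrt(sum(a * a for a in x))
theta = math.acos(dot / (norm(u) * norm(v)))
print(theta)  # pi/4 for these two vectors
```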
The properties we expect to be true are then easily verified. This notion extends to infinite-dimensional vector spaces over $ \mathbb{R} $, where defining an angle is not at all obvious. It is then trivially true that $ \langle u, v \rangle = |u| |v| \cos(\theta) $, since that is how $ \theta $ was defined.
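To illustrate the infinite-dimensional case, here is a sketch using the $ L^2 $ inner product $ \langle f, g \rangle = \int_0^1 f(x) g(x) \, dx $ on functions over $ [0, 1] $ (a standard choice, assumed here for illustration), approximated by a midpoint Riemann sum:

```python
# Angle between two *functions* under the L^2 inner product on [0, 1],
# approximated with a midpoint Riemann sum.
import math

def inner(f, g, n=100000):
    h = 1.0 / n
    return sum(f((k + 0.5) * h) * g((k + 0.5) * h) for k in range(n)) * h

f = lambda x: 1.0  # the constant function 1
g = lambda x: x    # the identity function

cos_theta = inner(f, g) / math.sqrt(inner(f, f) * inner(g, g))
theta = math.acos(cos_theta)
print(theta)  # close to pi/6: <f, g> = 1/2, |f| = 1, |g| = 1/sqrt(3)
```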
The cross product is an entirely separate concept which allows us to find a vector orthogonal to two given vectors in $ \mathbb{R}^3 $. In addition, its magnitude also gives the area of the parallelogram spanned by the vectors. These properties can be taken as the definition of the cross product (with appropriate care for orientation), or they can be derived as theorems starting from the algebraic definition.
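Both defining properties are easy to verify numerically. A sketch with numpy (the vectors are arbitrary examples):

```python
# Check the two defining properties of the cross product in R^3:
# orthogonality to both inputs, and magnitude equal to the area of
# the parallelogram they span.
import numpy as np

u = np.array([1.0, 2.0, 0.0])
v = np.array([0.0, 1.0, 3.0])
w = np.cross(u, v)

# Orthogonal to both factors:
assert np.isclose(np.dot(w, u), 0) and np.isclose(np.dot(w, v), 0)

# |u x v| = |u||v|sin(theta), the area of the parallelogram:
cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
area = np.linalg.norm(u) * np.linalg.norm(v) * np.sqrt(1 - cos_t**2)
assert np.isclose(np.linalg.norm(w), area)
```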
Sometimes, we just say that a $1\times 1$ matrix is the same as a scalar. After all, when it comes to adding and multiplying $1\times 1$ matrices versus adding and multiplying scalars, the only difference between something like $\begin{bmatrix}3\end{bmatrix}$ and $3$ is some brackets. Consider $$(3+5)\cdot 4 = 32 \\ (\begin{bmatrix} 3\end{bmatrix} + \begin{bmatrix} 5\end{bmatrix})\begin{bmatrix} 4\end{bmatrix} = \begin{bmatrix} 32\end{bmatrix}$$ The algebra works out exactly the same, so sometimes it's not ridiculous to think of $1\times 1$ matrices as just another way of writing scalars.
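The same parallel shows up in numpy, where a `1x1` array and a scalar obey identical arithmetic (a minimal sketch of the computation above):

```python
# The 1x1-matrix arithmetic above, reproduced entry for entry.
import numpy as np

s = (3 + 5) * 4                                            # plain scalars
m = (np.array([[3]]) + np.array([[5]])) @ np.array([[4]])  # 1x1 matrices

print(s)  # 32
print(m)  # [[32]]
assert m[0, 0] == s
```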
But if you do want to distinguish the two, then just think of the formula $a\cdot b = a^Tb$ as a way of finding out which scalar you get from the dot product of $a$ and $b$, and not literally as the dot product value itself (which should be a scalar). That is, we calculate the dot product of $\begin{bmatrix} 1 \\ 2\end{bmatrix}$ and $\begin{bmatrix} 3 \\ 4\end{bmatrix}$ by using the formula $$\begin{bmatrix} 1 \\ 2\end{bmatrix}^T\begin{bmatrix} 3 \\ 4\end{bmatrix} = \begin{bmatrix} 1 & 2\end{bmatrix}\begin{bmatrix} 3 \\ 4\end{bmatrix} = \begin{bmatrix} 11\end{bmatrix}$$ and then say that this tells us that the dot product is really $11$. So the formula $a^Tb$ is just an algorithm we use to find the correct scalar.
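The same recipe in numpy makes the distinction concrete: the product $a^Tb$ is a $1\times 1$ matrix, from which we read off the scalar (a sketch using the vectors above):

```python
# The a^T b recipe: a 1x2 row times a 2x1 column gives a 1x1 matrix,
# and the scalar dot product is its single entry.
import numpy as np

a = np.array([[1], [2]])  # 2x1 column vector
b = np.array([[3], [4]])  # 2x1 column vector

product = a.T @ b         # 1x1 matrix: [[11]]
dot = product.item()      # the scalar it encodes
print(dot)                # 11
```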
You can view it either way. It doesn't really make a difference.
Best Answer
Dot products and cross products are products of two like things: a vector and another vector. In a matrix-vector product, the matrix and the vector are two very different things. So a matrix-vector product cannot rightly be called either a dot product or a cross product.
That being said, the matrix-vector product is closely related to the dot product. In particular: suppose that $A$ is a matrix with row-vectors $A_1,\dots,A_n$, and $b$ is a column vector. Then the product $Ab$ will be the column vector with entries $(A_1 \cdot b,\cdots,A_n \cdot b)$.
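This row-by-row description is easy to check numerically. A sketch with an arbitrary $3\times 2$ matrix:

```python
# Each entry of Ab is the dot product of the corresponding row of A
# with the column vector b.
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
b = np.array([7.0, 8.0])

Ab = A @ b
row_dots = np.array([row @ b for row in A])  # (A_1 . b, ..., A_n . b)
assert np.allclose(Ab, row_dots)
print(Ab)  # [23. 53. 83.]
```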
Moreover: given two column vectors $u$ and $v$, their dot-product is the same as the matrix product $u^Tv$, where $T$ here means the transpose. In this sense, we might consider the dot-product to be a kind of matrix product, but the reverse is not generally true.