If $x_1,\dotsc,x_{n-1} \in \mathbb{R}^n$, one defines $x_1 \times \cdots \times x_{n-1} \in \mathbb{R}^n$ to be the unique vector such that
$$
\forall y \in \mathbb{R}^n, \quad \langle x_1 \times \cdots \times x_{n-1},y \rangle = \operatorname{det}(x_1,\dotsc,x_{n-1},y),
$$
where the determinant is viewed as a function of the columns (equivalently, the rows) of the usual matrix argument, i.e., as the unique antisymmetric $n$-form $\operatorname{det} : \mathbb{R}^n \times \cdots \times \mathbb{R}^n \to \mathbb{R}$ such that $\det(e_1,\dotsc,e_n) = 1$ for $\{e_k\}$ the standard ordered basis of $\mathbb{R}^n$. Such a vector exists and is unique because $y \mapsto \operatorname{det}(x_1,\dotsc,x_{n-1},y)$ is a linear functional on $\mathbb{R}^n$, and every linear functional on $\mathbb{R}^n$ is $\langle v, \cdot \rangle$ for a unique $v$.
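For example, when $n = 3$ this recovers the classical cross product: expanding $\operatorname{det}(x_1,x_2,y)$ along the $y$ argument gives, for $x_1 = (a_1,a_2,a_3)$ and $x_2 = (b_1,b_2,b_3)$,
$$
x_1 \times x_2 = (a_2 b_3 - a_3 b_2,\; a_3 b_1 - a_1 b_3,\; a_1 b_2 - a_2 b_1).
$$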
Now, suppose that $x_1,\dotsc,x_{n-1} \in \mathbb{R}^n$ are linearly independent, and hence span a hyperplane $H$ (an $(n-1)$-dimensional subspace) in $\mathbb{R}^n$. Then $x_1 \times \cdots \times x_{n-1}$ is orthogonal to each $x_k$ (take $y = x_k$ above: the determinant vanishes because of the repeated argument), and it is non-zero (complete $x_1,\dotsc,x_{n-1}$ to a basis to find $y$ with $\operatorname{det}(x_1,\dotsc,x_{n-1},y) \neq 0$), so it defines a non-zero normal vector to $H$; write $$x_1 \times \cdots \times x_{n-1} = \|x_1 \times \cdots \times x_{n-1}\|\hat{n}$$ for $\hat{n}$ the corresponding unit normal. Let $y \notin H$. Then $x_1,\dotsc,x_{n-1},y$ are linearly independent and span an $n$-dimensional parallelepiped $P$ with $n$-dimensional volume
$$
|\operatorname{det}(x_1,\dotsc,x_{n-1},y)| = |\langle x_1 \times \cdots \times x_{n-1},y\rangle| = \|x_1 \times \cdots \times x_{n-1}\||\langle \hat{n},y\rangle|.
$$
Now, with respect to the decomposition $\mathbb{R}^n = H^\perp \oplus H$, let
$$
T = \begin{pmatrix} I_{H^\perp} & 0 \\ M & I_{H} \end{pmatrix}
$$
for $M : H^\perp \to H$ given by $$M(c \hat{n}) = -c \langle \hat{n},y \rangle^{-1} P_H y = -c\langle \hat{n},y\rangle^{-1}(y-\langle\hat{n},y\rangle\hat{n}),$$ where $P_H(v)$ denotes the orthogonal projection of $v$ onto $H$. Then $T(P)$ is an $n$-dimensional parallelepiped with edges $Tx_1 = x_1,\dotsc,Tx_{n-1}=x_{n-1}$, and
$$
Ty = \langle \hat{n},y \rangle \hat{n} = P_{H^\perp} y = y - P_H y,
$$
with the same volume as $P$ (indeed, $T$ is block unitriangular, so $\det T = 1$ and $T$ preserves volume). On the one hand, since $Ty = y - P_H y$ with $P_H y \in H = \{x_1 \times \cdots \times x_{n-1}\}^\perp$,
$$
\operatorname{Vol}_n(T(P)) = |\operatorname{det}(Tx_1,\dotsc,Tx_{n-1},Ty)|\\ = |\operatorname{det}(x_1,\dotsc,x_{n-1},y-P_H y)|\\ = |\operatorname{det}(x_1,\dotsc,x_{n-1},y)|\\ = \|x_1 \times \cdots \times x_{n-1}\||\langle \hat{n},y\rangle|.
$$
On the other hand, since $Ty \in H^\perp$, $T(P)$ is an honest cylinder with height $\|Ty\| = |\langle \hat{n},y\rangle|$ and base the $(n-1)$-dimensional parallelepiped $R$ spanned by $x_1,\dotsc,x_{n-1}$, so that
$$
\operatorname{Vol}_n(T(P)) = \operatorname{Vol}_{n-1}(R)|\langle \hat{n},y\rangle|.
$$
Thus,
$$
\operatorname{Vol}_{n-1}(R)|\langle \hat{n},y\rangle| = \operatorname{Vol}_n(T(P)) = \|x_1 \times \cdots \times x_{n-1}\||\langle \hat{n},y\rangle|,
$$
so that
$$
\operatorname{Vol}_{n-1}(R) = \|x_1 \times \cdots \times x_{n-1}\|,
$$
as required.
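As a quick numerical sanity check of this identity (just a sketch, not part of the proof; it assumes NumPy and computes the cross product component-wise from the defining determinants):

```python
import numpy as np

def cross(X):
    # Rows of X are x_1, ..., x_{n-1} in R^n; the k-th component of the
    # cross product is <x_1 x ... x x_{n-1}, e_k> = det(x_1, ..., x_{n-1}, e_k).
    n = X.shape[1]
    return np.array([np.linalg.det(np.vstack([X, np.eye(n)[k]]))
                     for k in range(n)])

rng = np.random.default_rng(0)
n = 5
X = rng.standard_normal((n - 1, n))  # rows x_1, ..., x_{n-1}
c = cross(X)

assert np.allclose(X @ c, 0)  # orthogonal to each x_k
# The norm equals Vol_{n-1}(R), computed independently as the square root
# of the Gram determinant det[<x_i, x_j>] = det(X X^T):
assert np.isclose(np.linalg.norm(c), np.sqrt(np.linalg.det(X @ X.T)))
```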
EDIT: Theoretical Addendum
Let's see what $\phi x_1 \times \cdots \times \phi x_{n-1}$ is in terms of $x_1 \times \cdots \times x_{n-1}$ for $\phi$ a linear transformation on $\mathbb{R}^n$.
Define a linear map $T : (\mathbb{R}^n)^{\otimes(n-1)} \to (\mathbb{R}^n)^\ast$ by
$$
T : x_1 \otimes \cdots \otimes x_{n-1} \mapsto \operatorname{det}(x_1,\cdots,x_{n-1},\bullet),
$$
so that if $S : \mathbb{R}^n \to (\mathbb{R}^n)^\ast$ is the isomorphism $v \mapsto \langle v,\bullet \rangle$, then
$$
x_1 \times \cdots \times x_{n-1} = (S^{-1}T)(x_1 \otimes \cdots \otimes x_{n-1}).
$$
Now, since the determinant is antisymmetric, so too is $T$, and hence $T$ descends to a linear map $T : \bigwedge^{n-1} \mathbb{R}^n \to (\mathbb{R}^n)^\ast$,
$$
x_1 \wedge \cdots \wedge x_{n-1} \mapsto \operatorname{det}(x_1,\cdots,x_{n-1},\bullet);
$$
indeed, if $\operatorname{Vol} = e_1 \wedge \cdots \wedge e_n$ for $\{e_k\}$ the standard ordered basis for $\mathbb{R}^n$, then for any $y \in \mathbb{R}^n$,
$$
\langle x_1 \times \cdots \times x_{n-1},y \rangle \operatorname{Vol} = \operatorname{det}(x_1,\cdots,x_{n-1},y)\operatorname{Vol} = x_1 \wedge \cdots \wedge x_{n-1} \wedge y,
$$
which, in fact, shows that
$$
x_1 \times \cdots \times x_{n-1} = \ast (x_1 \wedge \cdots \wedge x_{n-1}),
$$
where $\ast : \bigwedge^{n-1} \mathbb{R}^n \to \mathbb{R}^n$ is the relevant Hodge $\ast$-operator. Thus, a cross product is really an $(n-1)$-vector in the orientation-dependent disguise given by the Hodge $\ast$-operator; in particular, it will really transform as an $(n-1)$-vector, as we'll see now.
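For instance, when $n = 3$, $\ast(e_1 \wedge e_2) = e_3$, $\ast(e_2 \wedge e_3) = e_1$, and $\ast(e_3 \wedge e_1) = e_2$, recovering the familiar $e_1 \times e_2 = e_3$ and its cyclic permutations.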
Now, let $\phi : \mathbb{R}^n \to \mathbb{R}^n$ be linear. Observe that the adjugate matrix $\operatorname{Adj}(\phi)$ of $\phi$ can be invariantly defined as the unique linear transformation $\operatorname{Adj}(\phi) : \mathbb{R}^n \to \mathbb{R}^n$ such that for any $\omega \in \bigwedge^{n-1} \mathbb{R}^n$ and $y \in \mathbb{R}^n$,
$$
(\wedge^{n-1}\phi)\omega \wedge y = \omega \wedge \operatorname{Adj}(\phi) y,
$$
e.g., in our case,
$$
x_1 \wedge \cdots \wedge x_{n-1} \wedge \operatorname{Adj}(\phi) y = (\wedge^{n-1}\phi)(x_1 \wedge \cdots \wedge x_{n-1}) \wedge y = \phi x_1 \wedge \cdots \wedge \phi x_{n-1} \wedge y,
$$
and that, as a matrix, $\operatorname{Adj}(\phi) = \operatorname{Cof}(\phi)^T$, where $\operatorname{Cof}(\phi)$ denotes the cofactor matrix of $\phi$. Then for any $y$,
$$
\langle \phi x_1 \times \cdots \times \phi x_{n-1},y \rangle \operatorname{Vol} = \operatorname{det}(\phi x_1,\cdots,\phi x_{n-1},y)\operatorname{Vol}\\ = \phi x_1 \wedge \cdots \wedge \phi x_{n-1} \wedge y\\ = (\wedge^{n-1}\phi)(x_1 \wedge \cdots \wedge x_{n-1}) \wedge y\\ = (x_1 \wedge \cdots \wedge x_{n-1}) \wedge \operatorname{Adj}(\phi)y\\ = \langle x_1 \times \cdots \times x_{n-1},\operatorname{Adj}(\phi)y \rangle \operatorname{Vol}\\ = \langle \operatorname{Cof}(\phi)(x_1 \times \cdots \times x_{n-1}),y \rangle \operatorname{Vol},
$$
and hence, since $y$ was arbitrary,
$$
\phi x_1 \times \cdots \times \phi x_{n-1} = \operatorname{Cof}(\phi)(x_1 \times \cdots \times x_{n-1}) = (\ast \circ \wedge^{n-1}\phi \circ \ast^{-1})(x_1 \times \cdots \times x_{n-1}),
$$
in terms of the Hodge $\ast$-operation and the invariantly defined $\wedge^{n-1}\phi$.
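Continuing the numerical sketch from the first proof (this reuses `cross`, `X`, `n`, and `rng` defined there, and uses the formula $\operatorname{Cof}(\phi) = \operatorname{det}(\phi)\,\phi^{-T}$, valid for invertible $\phi$), one can test the transformation law directly:

```python
phi = rng.standard_normal((n, n))                # almost surely invertible
cof = np.linalg.det(phi) * np.linalg.inv(phi).T  # Cof(phi) = det(phi) * phi^{-T}
phiX = X @ phi.T                                 # rows are phi x_1, ..., phi x_{n-1}
assert np.allclose(cross(phiX), cof @ cross(X))  # phi x_1 x ... = Cof(phi)(x_1 x ...)
```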
The dot product is a special case of a more general concept, the inner product. If you have a vector space $ V $ over the real or complex numbers, then an inner product is a map $ f : V \times V \to \mathbb{R} $ or $ f : V \times V \to \mathbb{C} $ (into the field of scalars) which is conjugate symmetric, positive definite, and linear in its first argument. We usually write $ f(u, v) = \langle u, v \rangle $, in which case these properties can be summed up as follows:
- Conjugate symmetry: $ \overline{\langle u, v \rangle} = \langle v, u \rangle $, where $ \bar{z} $ denotes complex conjugation. Note that this implies $ \langle u, u \rangle $ is always real for any vector $ u $.
- Positive definiteness: $ \langle v, v \rangle \geq 0 $ for any $ v \in V $, with equality holding iff $ v = 0 $.
- Linearity in the first argument: $ \langle \alpha u + \beta v, w \rangle = \alpha \langle u, w \rangle + \beta \langle v, w \rangle $ where $ u, v, w \in V $ and $ \alpha, \beta $ are in the field of scalars.
If $ V = \mathbb{R}^n $, then we can fix a basis $ B = \{ b_i \in \mathbb{R}^n : 1 \leq i \leq n \} $ and define $ \langle b_i, b_i \rangle = 1 $ and $ \langle b_i, b_j \rangle = 0 $ for $ i \neq j $. Extending this to all of $ \mathbb{R}^n $ by bilinearity gives us
$$ \left \langle \sum_{k=1}^{n} c_k b_k, \sum_{j=1}^{n} d_j b_j \right \rangle = \sum_{1 \leq k, j \leq n} c_k d_j \langle b_k, b_j \rangle = \sum_{k=1}^{n} c_k d_k $$
where positive definiteness is readily verified. You will recognize this expression as the definition of the dot product. Indeed, if we take our basis $ B $ to be the standard basis of $ \mathbb{R}^n $, then this inner product is the dot product.
Why is this formalism more powerful? A result about the inner product is the Cauchy-Schwarz inequality, which says that $ |\langle u, v \rangle| \leq |u| |v| $ where $ |u| = \sqrt{\langle u, u \rangle} $. This tells us that
$$ -1 \leq \frac{\langle u, v \rangle}{|u| |v|} \leq 1 $$
assuming that our field of scalars is $ \mathbb{R} $. We then see that the arccosine of this expression is well-defined, so we can define the angle between nonzero vectors $ u $ and $ v $ as
$$ \theta = \arccos \left( \frac{\langle u, v \rangle}{|u| |v|} \right) $$
The properties we expect of an angle are then easily verified. This notion extends to infinite-dimensional inner product spaces over $ \mathbb{R} $, where defining an angle is not at all obvious. It is then trivially true that $ \langle u, v \rangle = |u| |v| \cos(\theta) $, since that is how $ \theta $ was defined.
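As a small illustration (a sketch assuming NumPy; the `clip` guards against floating-point round-off pushing the ratio just outside $[-1, 1]$):

```python
import numpy as np

def angle(u, v):
    # Angle between nonzero vectors u, v in R^n, via the normalized inner product.
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

print(np.degrees(angle([1, 0], [1, 1])))  # 45.0
```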
The cross product is an entirely separate concept which allows us to find a vector orthogonal to two given vectors in $ \mathbb{R}^3 $. In addition, its magnitude also gives the area of the parallelogram spanned by the vectors. These properties can be taken as the definition of the cross product (with appropriate care for orientation), or they can be derived as theorems starting from the algebraic definition.
Best Answer
Note $\det(u_1\cdots u_k)$ doesn't make sense unless $(u_1\cdots u_k)$ is a square matrix, i.e. $k=n$.
(I am treating vectors as column vectors in $\mathbb{R}^n$.)
The inner product in $\Lambda^k\mathbb{R}^n$ satisfies
$$ \langle u_1\wedge\cdots\wedge u_k,v_1\wedge\cdots\wedge v_k\rangle=\det [u_i\cdot v_j] $$
That is, the $ij$ entry (of the matrix we take the determinant of) is the dot product of $u_i$ and $v_j$.
In particular the norm is given by the so-called Gramian determinant:
$$ \|u_1\wedge\cdots\wedge u_k\|^2=\det[u_i\cdot u_j] $$
If we write $U=(u_1\cdots u_k)$, not necessarily a square matrix, then this is $\det(U^TU)$.
When $U$ is a square matrix, this simplifies to $\|u_1\wedge\cdots\wedge u_k\|=|\det U|$, yes.
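As a quick numerical illustration (a sketch assuming NumPy; the columns of `U` play the role of $u_1,\dotsc,u_k$):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 5, 3
U = rng.standard_normal((n, k))  # columns u_1, ..., u_k

# ||u_1 ^ ... ^ u_k||^2 = det(U^T U), the Gramian determinant (nonnegative):
assert np.linalg.det(U.T @ U) >= 0

# For k = n, the norm reduces to |det U|:
W = rng.standard_normal((n, n))
assert np.isclose(np.sqrt(np.linalg.det(W.T @ W)), abs(np.linalg.det(W)))
```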