[Math] Cross product of vectors as a determinant: valid matrix operation

Tags: cross-product, determinant, field-theory, matrices, vector-spaces

"The definition of the cross product can also be represented by the
determinant of a formal matrix."
Wikipedia

This seems like a hack to me: something of much practical use, but not mathematically rigorous. Unless I am mistaken,

  1. It only works for vectors in 3-space
  2. A matrix with mixed scalars and vectors is not strictly legal

Again, Wikipedia says,

"Matrices can be considered with much more general types of entries
than real or complex numbers. As a first step of generalization, any
field, i.e., a set where addition, subtraction, multiplication and
division operations are defined and well-behaved, may be used instead
of $\mathbb{R}$ or $\mathbb{C}$, for example rational numbers or
finite fields."

Are there fields that can consist of a mix of vectors and scalars? Is there a more general form of the vector cross product that would explain why the determinant formula for 3-space has pragmatic value? Or is this indeed a hack?

Best Answer

I think the best way to answer your question is: it's a mnemonic.

This mnemonic lets you get your hands on a collection of mathematical objects called "exterior forms". From this perspective, it's not a "hack" but to explain exactly what it is essentially requires discussing dual spaces and things not appropriate for standard multivariable calculus classes. [Edit: The Mathoverflow post cited in the comments above is a great discussion on how one might try to give this a formal footing.]

Here's a way to view why/how the cross product works. Let $e_1=\langle 1,0,\dots,0 \rangle$, $e_2=\langle 0,1,\dots,0 \rangle$, $\dots$, $e_n = \langle 0,\dots,0,1\rangle$ [so in $\mathbb{R}^3$ we have $e_1=\vec{i}$, $e_2=\vec{j}$, and $e_3=\vec{k}$]. Next, consider vectors $\vec{a}_1 =\langle a_{11},a_{12},\dots,a_{1n} \rangle$, $\vec{a}_2 =\langle a_{21},a_{22},\dots,a_{2n} \rangle$, $\dots$, $\vec{a}_{n-1} =\langle a_{(n-1)1},a_{(n-1)2},\dots,a_{(n-1)n} \rangle$. Consider the $n \times n$ "matrix"

$$ A = \begin{bmatrix} e_1 & e_2 & \cdots & e_n \\ a_{11} & a_{12} & \cdots & a_{1n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{(n-1)1} & a_{(n-1)2} & \cdots & a_{(n-1)n} \end{bmatrix} $$
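For concreteness, here's a little Python/NumPy sketch of what this formal determinant computes in the $n=3$ case: expanding along the first row, the coefficient of each $e_j$ is an honest scalar subdeterminant. (The function name `formal_det_cross` is mine, just for illustration.)

```python
import numpy as np

def formal_det_cross(a1, a2):
    """Expand the formal determinant | e1 e2 e3 ; a1 ; a2 | along its
    first row: the coefficient of e_j is (-1)^j times the 2x2 minor
    obtained by deleting column j from the two scalar rows."""
    M = np.vstack([np.asarray(a1, float), np.asarray(a2, float)])
    return np.array([(-1) ** j * np.linalg.det(np.delete(M, j, axis=1))
                     for j in range(3)])

a1, a2 = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
print(formal_det_cross(a1, a2))   # agrees with np.cross(a1, a2)
```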

The determinant of $A$ is a vector (the vectors $e_1$, $e_2$, $\dots$ will only be multiplied by subdeterminants made up entirely of scalars so we never need to worry about multiplying vectors). Also, by the way the dot product is defined, $\mathrm{det}(A) \cdot \vec{b}$ just results in replacing $e_1$, $e_2$, etc. with the components of $\vec{b}$ (this is easily seen from the cofactor expansion of the determinant along the first row). That is...

$$ \mathrm{det}(A) \cdot \vec{b} = \mathrm{det}\left( \begin{bmatrix} b_1 & b_2 & \cdots & b_n \\ a_{11} & a_{12} & \cdots & a_{1n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{(n-1)1} & a_{(n-1)2} & \cdots & a_{(n-1)n} \end{bmatrix} \right) $$
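In $\mathbb{R}^3$ this identity is just the scalar triple product, and it's easy to spot-check numerically (the random vectors below are purely illustrative):

```python
import numpy as np

# Spot-check: det(A) . b equals the determinant with b substituted for
# the first row (the scalar triple product identity in R^3).
rng = np.random.default_rng(0)
a1, a2, b = rng.standard_normal((3, 3))
lhs = np.dot(np.cross(a1, a2), b)              # det(A) . b
rhs = np.linalg.det(np.vstack([b, a1, a2]))    # b in the first row
print(np.isclose(lhs, rhs))                    # True
```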

Now consider that a determinant of a matrix is zero if it has a repeated row. So if we dot this "cross product vector" $\mathrm{det}(A)$ with any $\vec{a}_i$, we'll get zero. Thus $\mathrm{det}(A)$ is orthogonal to $\vec{a}_1$, $\dots$, $\vec{a}_{n-1}$. Hence we have a "cross product" for $\mathbb{R}^n$.
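The same first-row cofactor expansion works verbatim for $n-1$ vectors in $\mathbb{R}^n$. Here is a sketch (the function name `generalized_cross` is mine), with an orthogonality check in $\mathbb{R}^4$:

```python
import numpy as np

def generalized_cross(vectors):
    """Take n-1 vectors in R^n and return a vector orthogonal to all of
    them, via the first-row cofactor expansion of the formal matrix."""
    M = np.asarray(vectors, dtype=float)     # shape (n-1, n)
    n = M.shape[1]
    return np.array([(-1) ** j * np.linalg.det(np.delete(M, j, axis=1))
                     for j in range(n)])

# Three vectors in R^4; the output should be perpendicular to each one.
vs = [[1, 0, 0, 1], [0, 1, 0, 2], [0, 0, 1, 3]]
w = generalized_cross(vs)
print(np.allclose([np.dot(w, v) for v in vs], 0))   # True
```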

Not a hack. Just a clean way to rig up a function which takes in $n-1$ vectors and spits out a vector perpendicular to all of its inputs.

Edit: Some people say there is no cross product except in $\mathbb{R}^3$. This is true in a certain sense. If the purpose of the cross product is to give a vector perpendicular to two other vectors, then 2 dimensions of inputs must determine a 1-dimensional output, so we need to be working in $2+1=3$ dimensional space. (There is also a binary cross product in $\mathbb{R}^7$, but that's a long story.) However, if you don't require your "cross product" to be a binary product, it'll work in $\mathbb{R}^n$ ($n \geq 2$).