[Math] How to efficiently compute the generalized cross product

geometry, linear algebra, matrices

It's possible to extend the well-known cross product of two vectors in $\mathbb{R}^3$ to $n-1$ vectors in $\mathbb{R}^n$.

Let $\vec{v}_1, \vec{v}_2, \dots, \vec{v}_{n-1} \in \mathbb{R}^n$ and let $\vec{e}_1 = (1, 0, 0, \dots, 0)^T, \vec{e}_2 = (0, 1, 0, \dots, 0)^T, \dots, \vec{e}_n = (0, 0, \dots, 0, 1)^T$ be the unit vectors of the standard basis in $\mathbb{R}^n$. Then we can define a "cross product" in $\mathbb{R}^n$ by the following determinant:

$\vec{v}_1 \times \vec{v}_2 \times \dots \times \vec{v}_{n-1} = \det
\begin{pmatrix}
v_{1,1} & v_{2,1} & \cdots & v_{n-1,1} & \vec{e}_1\cr
v_{1,2} & v_{2,2} & \cdots & v_{n-1,2} & \vec{e}_2\cr
\vdots & \vdots & \ddots & \vdots & \vdots\cr
v_{1,n} & v_{2,n} & \cdots & v_{n-1,n} & \vec{e}_n
\end{pmatrix}$
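
For $n = 3$ this reduces to the familiar cross product: expanding along the last column gives

$\vec{v}_1 \times \vec{v}_2 = \det\begin{pmatrix} v_{1,1} & v_{2,1} & \vec{e}_1\cr v_{1,2} & v_{2,2} & \vec{e}_2\cr v_{1,3} & v_{2,3} & \vec{e}_3 \end{pmatrix} = (v_{1,2}v_{2,3} - v_{1,3}v_{2,2})\,\vec{e}_1 - (v_{1,1}v_{2,3} - v_{1,3}v_{2,1})\,\vec{e}_2 + (v_{1,1}v_{2,2} - v_{1,2}v_{2,1})\,\vec{e}_3.$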

In theory it's easy to compute the determinant by cofactor expansion along the last column, but I'm wondering how one would do this in practice. Computing the $n$ minors individually seems like a lot of overhead to me, and I suspect it's not very stable.
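To make the cofactor expansion concrete, here is a minimal NumPy sketch (the function name `cross_cofactor` is mine, not from any library) that computes the $n$ minors directly with `numpy.linalg.det`:

```python
import numpy as np

def cross_cofactor(vectors):
    """Generalized cross product of n-1 vectors in R^n, computed by
    cofactor expansion along the column of basis vectors.

    `vectors` is a sequence of n-1 arrays of length n."""
    V = np.column_stack(vectors)          # n x (n-1), vectors as columns
    n = V.shape[0]
    result = np.empty(n)
    for i in range(n):
        # Minor: delete row i, keep all n-1 vector columns.
        minor = np.delete(V, i, axis=0)
        # Cofactor sign (-1)^(i + n - 1) from expanding along the last column
        # (0-based row index i, last column).
        result[i] = (-1) ** (i + n - 1) * np.linalg.det(minor)
    return result
```

For $n = 3$ this reproduces the ordinary cross product, but each `det` call costs $O(n^3)$, so the whole expansion is $O(n^4)$.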

Edit: As it turned out later, the following system of equations is wrong! Moreover, the approach does not compute the cross product but only an orthogonal vector of arbitrary length.

The only approach I've found on this topic is in the code of qhull. Unfortunately it's quite obfuscated and there aren't any helpful comments. I figured out that the problem is addressed there by solving the following system of equations:

$A\,\vec{x} = \vec{b}$

with

$A = \begin{pmatrix} v_{1,1} & v_{1,2} & \dots & v_{1,n}\cr v_{2,1} & v_{2,2} & \dots & v_{2,n}\cr\vdots & \vdots & \ddots & \vdots\cr v_{n-1,1} & v_{n-1,2} & \dots & v_{n-1,n}\cr 0 &0& \cdots & 1 \end{pmatrix}$

$\vec{b} = \begin{pmatrix}0\cr0\cr\vdots\cr 0 \cr \operatorname{sgn}(\det(A)) \end{pmatrix}$

and

$\operatorname{sgn}(x) := \begin{cases} +1 & x \geq 0 \cr -1 & x < 0 \end{cases}$

It's clear that the first $n-1$ rows force $\vec{x}$ to be orthogonal to $\vec{v}_1, \dots, \vec{v}_{n-1}$, but I can't work out the origin or meaning of the last row. Furthermore, I suspect this approach is problematic if $\det(A) = 0$ (e.g. if vectors are parallel to a standard unit vector).
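For illustration, here is a small NumPy sketch of the system as described above (this is my reading of it, not qhull's actual code). As noted in the edit, it only produces a vector orthogonal to the $\vec{v}_i$ with arbitrary length, and it breaks down when $A$ is singular:

```python
import numpy as np

def orthogonal_via_system(vectors):
    """Sketch of the qhull-style linear system: rows 1..n-1 enforce
    orthogonality to each v_i, the last row pins the n-th coordinate of x
    to sgn(det(A)). Returns a vector orthogonal to all v_i, but with
    arbitrary length -- NOT the cross product itself."""
    n = len(vectors) + 1
    A = np.zeros((n, n))
    A[:-1] = np.asarray(vectors)   # rows v_1, ..., v_{n-1}
    A[-1, -1] = 1.0                # last row (0, ..., 0, 1)
    b = np.zeros(n)
    b[-1] = 1.0 if np.linalg.det(A) >= 0 else -1.0   # sgn(det(A))
    # Fails (or is badly conditioned) when det(A) = 0, e.g. when some v_i
    # lies in the span of e_1, ..., e_{n-1}.
    return np.linalg.solve(A, b)
```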

So my questions are:

  1. How is the last row deduced and what is its meaning?
  2. Are there other approaches that are faster and/or more stable?

Best Answer

Regarding your question 2: the approach I'd take is to compute the first determinant from an RQ factorization of the leading $(n-1)\times(n-1)$ matrix, and then each further one by replacing, in turn, each row of $R$ with $(\text{last row})\,Q$ and re-triangularizing manually with $O(n)$ Givens transformations on the left. In this way you pay $O(n^3)$ for the first determinant and then only $O(n^2)$ (instead of the usual $O(n^3)$) for each subsequent one.

Normwise stability should be ensured, since we only apply orthogonal transformations.
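The incremental Givens update above takes some care to implement. As a simpler one-shot illustration of the same underlying idea (extracting both the orthogonal direction and the determinant from a single orthogonal factorization, here a full QR instead of RQ, at $O(n^3)$ total), one can sketch it in NumPy as follows; the function name `cross_via_qr` is hypothetical:

```python
import numpy as np

def cross_via_qr(vectors):
    """Generalized cross product from one full QR factorization.

    With V = QR, Q an n x n orthogonal matrix, the last column q_n of Q
    spans the orthogonal complement of the v_i, and the cross product is
    det([V | q_n]) * q_n = det(Q) * prod(diag(R)) * q_n,
    which fixes both its length and its sign."""
    V = np.column_stack(vectors)                # n x (n-1)
    Q, R = np.linalg.qr(V, mode="complete")     # full Q: n x n
    scale = np.linalg.det(Q) * np.prod(np.diag(R))
    return scale * Q[:, -1]
```

The result is independent of the sign ambiguities in the QR factorization, since flipping a column of $Q$ flips $\det(Q)$ (or a diagonal entry of $R$) at the same time.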