[Math] Geometric interpretation of determinant


I am trying to prove geometrically, without invoking the dot or cross products or orthogonality, that the volume of a parallelepiped formed by vectors $\begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix}$, $\begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}$ and $\begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix}$ is
$$\det P = \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix}.$$

One of the options I see is to argue that any parallelepiped can be obtained from a rectangular box whose volume is given by
$$\det C = \begin{vmatrix} a_1 & 0 & 0 \\ 0 & b_2 & 0 \\ 0 & 0 & c_3 \end{vmatrix}$$
by multiplying its matrix by elementary matrices, so $P = C E_1 E_2 \cdots E_i$. The absolute value of the determinant of each elementary matrix is either one, for the operations that do not change the volume (exchanging two columns, or adding a multiple of one column to another, which is just a shear), or equal to the factor by which that operation scales the volume (multiplying a column by a constant stretches one edge by that constant). So the volume of the resulting figure is $|\det C \det E_1 \cdots \det E_i| = |\det P|$.
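For example, here is a quick numeric check of the determinant bookkeeping I have in mind (a hedged sketch assuming numpy; the particular matrices $C$, $E_1$, $E_2$ are arbitrary examples, not part of the argument):

```python
import numpy as np

# Diagonal matrix C: a rectangular box with side lengths 2, 3, 5 (volume 30).
C = np.diag([2.0, 3.0, 5.0])

# Elementary column operations, applied by right-multiplication:
E1 = np.array([[1.0, 0.5, 0.0],   # add half of column 1 to column 2 (a shear)
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]])
E2 = np.array([[0.0, 1.0, 0.0],   # exchange columns 1 and 2
               [1.0, 0.0, 0.0],
               [0.0, 0.0, 1.0]])

P = C @ E1 @ E2

# The determinant is multiplicative, so det P = det C * det E1 * det E2;
# its absolute value, 30, is the volume of the parallelepiped P.
print(np.linalg.det(P), np.linalg.det(C) * np.linalg.det(E1) * np.linalg.det(E2))
```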

1.) Is this argument correct?

2.) Is it possible to prove it geometrically? By this I mean similarly to the 2D case here.

3.) Wikipedia says it is possible to find the volume using Cramer's rule on some 2D matrices. How is this done?

Thanks for any advice!

Best Answer

Your argument is correct.

With a little effort (but not much), you should be able to modify your argument to make it more geometric.

Here is the two dimensional version:

If we have a parallelogram spanned by the vectors $\vec{a}$, $\vec{b}$, then the area of the parallelogram is the same as the area of the rectangle spanned by $\vec{a}, \vec{b}',$ where $\vec{b}'$ is the component of $\vec{b}$ that is orthogonal to $\vec{a}$, i.e. $$\vec{b}' = \vec{b} - \dfrac{(\vec{a} \cdot \vec{b})}{(\vec{a} \cdot \vec{a})} \vec{a}.$$

Now the process of going from the matrix $\Bigl( \, \vec{a} \quad \vec{b} \, \Bigr) $ to the matrix $\Bigl( \, \vec{a} \quad \vec{b}' \, \Bigr)$ involves right-multiplying by the matrix $\begin{pmatrix} 1 & - (\vec{a} \cdot \vec{b})/ (\vec{a}\cdot\vec{a}) \\ 0 & 1\end{pmatrix}$, whose determinant is $1$.

Thus we see that $\Bigl( \, \vec{a} \quad \vec{b} \, \Bigr)$ and $\Bigl( \, \vec{a} \quad \vec{b}' \, \Bigr)$ have the same determinant, and also describe parallelograms with the same area, the latter being a rectangle.
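As a concrete illustration, here is a hedged numeric sketch (assuming numpy; the vectors $\vec{a}$, $\vec{b}$ are arbitrary examples) checking that the shear sends $\bigl( \, \vec{a} \quad \vec{b} \, \bigr)$ to $\bigl( \, \vec{a} \quad \vec{b}' \, \bigr)$, leaves the determinant unchanged, and that $\vec{b}'$ really is perpendicular to $\vec{a}$:

```python
import numpy as np

a = np.array([3.0, 1.0])
b = np.array([1.0, 2.0])

# b' = b - (a.b)/(a.a) a : the component of b orthogonal to a
b_prime = b - (a @ b) / (a @ a) * a

# The shear matrix with determinant 1 used for the right-multiplication
S = np.array([[1.0, -(a @ b) / (a @ a)],
              [0.0, 1.0]])

M  = np.column_stack([a, b])        # the parallelogram
Mp = np.column_stack([a, b_prime])  # the rectangle spanned by a and b'

print(np.allclose(M @ S, Mp))               # True: the shear does the projection
print(np.linalg.det(M), np.linalg.det(Mp))  # equal determinants
print(a @ b_prime)                          # ~0: b' is orthogonal to a
```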

This reduces you to checking the relationship between area and determinant in the case of a rectangle. Rotating this rectangle, you can make its edges parallel to the coordinate axes. Again, a rotation matrix has determinant one, so you are reduced to checking the relationship between determinants and areas in the case of a rectangle whose sides are parallel to the coordinate axes. This is pretty obvious, and so we are done.
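The rotation step can be checked in the same spirit (again a hedged numpy sketch with the same arbitrary example vectors): the rotation has determinant $1$ and carries the rectangle onto an axis-aligned one, whose area is visibly the product of the side lengths.

```python
import numpy as np

a = np.array([3.0, 1.0])
b = np.array([1.0, 2.0])
b_prime = b - (a @ b) / (a @ a) * a     # as above, b' is orthogonal to a

# Rotation sending a to the positive x-axis; its determinant is 1.
c, s = a / np.linalg.norm(a)
R = np.array([[ c, s],
              [-s, c]])

Mp = np.column_stack([a, b_prime])
print(np.linalg.det(R))           # 1.0
print(R @ Mp)                     # a diagonal matrix: an axis-aligned rectangle
print(abs(np.linalg.det(Mp)))     # 5.0, the product of the two side lengths
```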


In the three dimensional case, you can argue similarly: you first of all reduce to the case where one face is a rectangle, and you then reduce to the case when the third side is perpendicular to the rectangular face, so that the whole thing is a cuboid. These steps involve right-multiplying by matrices which are upper triangular with $1$'s down the diagonal, which thus have det $= 1$.

Now applying a bunch of rotations (again, each has det $= 1$), you can make your cuboid have sides parallel to the coordinate axes, at which point the formula is again pretty obvious.
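Here is the analogous check in three dimensions (a hedged sketch assuming numpy; the matrix $A$ below is an arbitrary example). The successive subtractions are exactly the shears described above, each with determinant $1$, and the cuboid they produce has volume $|\det A|$:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 3.0]])
a, b, c = A.T   # the three edge vectors of the parallelepiped

# Remove the components along earlier edges; each subtraction is a shear (det = 1).
b2 = b - (a @ b) / (a @ a) * a
c2 = c - (a @ c) / (a @ a) * a
c2 = c2 - (b2 @ c2) / (b2 @ b2) * b2

# The edges a, b2, c2 are mutually perpendicular, so the volume of the
# resulting cuboid is just the product of the edge lengths.
cuboid_volume = np.linalg.norm(a) * np.linalg.norm(b2) * np.linalg.norm(c2)
print(cuboid_volume, abs(np.linalg.det(A)))   # the two agree (here, 5.0)
```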


Note that if we allow ourselves one more step --- namely, multiplying by a diagonal matrix (which, geometrically, is a rescaling of each of the coordinate axes) --- then we can start with any (non-degenerate) parallelepiped and convert it to the standard cuboid with unit length sides sitting at the origin.

In linear algebra terms, this can be restated as the fact that any matrix with non-zero determinant can be written as the product of a diagonal matrix, some rotation matrices, and an upper triangular matrix with $1$'s down the diagonal. Combining the diagonal and upper triangular matrix with $1$'s down the diagonal, we obtain an upper triangular matrix with non-zero entries down the diagonal.

In other words, any matrix with non-zero determinant can be written as a product of some rotation matrices with an upper triangular matrix. This is usually called the $QR$ decomposition in linear algebra textbooks; in more theoretical treatments it is called the Iwasawa decomposition.

So what I have just given is a geometric description of the $QR$ decomposition.
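Numerically this is easy to see with numpy.linalg.qr (a hedged sketch; the matrix $A$ is an arbitrary example): $Q$ is orthogonal, so $|\det A|$ equals the product of the absolute values of the diagonal entries of the upper triangular factor $R$, i.e. the volume of the cuboid from the argument above.

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 3.0]])

# QR decomposition: Q orthogonal (a rotation/reflection), R upper triangular
Q, R = np.linalg.qr(A)

# |det Q| = 1, so |det A| = |det R| = product of |diagonal entries of R|,
# which is the volume of the cuboid with those side lengths.
print(abs(np.linalg.det(A)), np.prod(np.abs(np.diag(R))))   # both 5.0
```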


The difference between what I've described and your argument is that you use elementary matrices, while I use just upper triangular matrices with $1$'s down the diagonal. These arise from the geometric process of projecting one vector to make it perpendicular to another, which is where the geometric perspective is coming from.
