I know that you can find the determinant of a matrix either by row reducing it to upper triangular form and then multiplying the diagonal entries, or by expanding by cofactors. But could I reduce the matrix only partway (not all the way to upper triangular form) and then do cofactor expansion? Would that give me the same determinant?
[Math] Finding determinants using both reduction and cofactor expansion
determinant, linear-algebra
Related Solutions
Of course this theorem has a geometric interpretation! In a sense, it's a multidimensional analogue of «the volume of a parallelepiped is the product of the area of its base and its height».
3. Let's start with $3\times3$ case: $$ \left|\begin{matrix}u_1&u_2&u_3\\v_1&v_2&v_3\\w_1&w_2&w_3\end{matrix}\right|= u_1\left|\begin{matrix}v_2&v_3\\w_2&w_3\end{matrix}\right| -u_2\left|\begin{matrix}v_1&v_3\\w_1&w_3\end{matrix}\right| +u_3\left|\begin{matrix}v_1&v_2\\w_1&w_2\end{matrix}\right|. $$ LHS is the volume of the parallelepiped spanned by three vectors, $u$, $v$ and $w$. What's the meaning of RHS? Clearly that's a scalar product of $u$ with something — namely, with the vector $$ \left(\left|\begin{matrix}v_2&v_3\\w_2&w_3\end{matrix}\right|, -\left|\begin{matrix}v_1&v_3\\w_1&w_3\end{matrix}\right|,\left|\begin{matrix}v_1&v_2\\w_1&w_2\end{matrix}\right|\right)= \left|\begin{matrix}\overrightarrow{e_1}&\overrightarrow{e_2}&\overrightarrow{e_3}\\v_1&v_2&v_3\\w_1&w_2&w_3\end{matrix}\right| $$ — i.e. with vector product of $v$ and $w$.
So the formula we get is $\operatorname{vol}\langle u,v,w\rangle=(u,[v,w])$; now by the (geometric) definition of the scalar product this is $\operatorname{area}\langle v,w\rangle\cdot(|u|\cos\phi)$, where $\phi$ is the angle between $u$ and $[v,w]$. The first factor is the area of the base, and since $[v,w]$ is orthogonal to the base, the second factor is the height of our parallelepiped.
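The $3\times3$ identity above is easy to check numerically. Here is a minimal pure-Python sketch (the vectors are arbitrary illustration values, not from the original discussion): the determinant with $u,v,w$ as rows equals the scalar product of $u$ with the vector product $[v,w]$.

```python
def cross(a, b):
    # Vector product [a, b] in R^3.
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

def det3(u, v, w):
    # Cofactor expansion along the first row, as in the formula above.
    return (u[0]*(v[1]*w[2] - v[2]*w[1])
            - u[1]*(v[0]*w[2] - v[2]*w[0])
            + u[2]*(v[0]*w[1] - v[1]*w[0]))

u, v, w = (1, 2, 3), (4, 0, 1), (0, 5, 2)
print(det3(u, v, w))        # signed volume; prints 39
print(dot(u, cross(v, w)))  # same number: 39
```

Both lines print the same signed volume, confirming $\operatorname{vol}\langle u,v,w\rangle=(u,[v,w])$ for this sample.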
n. Consider the general case of vectors in an $n$-dimensional space $V$. In the RHS of the theorem we again see a scalar product of the first vector, $v$, with a vector $B$ (in coordinate-free language it really lives in $\Lambda^{n-1}V$, but let's ignore this for now) whose coordinates are the cofactors $C_{1i}$.
The question is: what is the geometric meaning of $B$? Let me give three (closely related) answers.
- By the very same cofactor theorem it measures the $(n-1)$-dimensional area of the projection of the base of our $n$-parallelepiped (i.e. the $(n-1)$-parallelepiped spanned by all vectors but $v$) on different hyperplanes; more precisely, the area of its projection on the hyperplane orthogonal to a unit vector $v$ is the scalar product $(B,v)$.
- Let's prove the cofactor theorem instead of using it. The function $(B,x)$ is linear in $x$. For a basis vector $x=e_i$ we have $(B,x)=C_{1i}$, which (up to sign, at least) is the area of the span of projections of our vectors on the hyperplane orthogonal to $e_i$. So $(B,x)$ is indeed the area of the projection of the base on the hyperplane orthogonal to $x$ (multiplied by $|x|$ and taken with appropriate signs).
- Even better, since everything is invariant under (special) orthogonal transforms, let's change basis to make $v$ a scalar multiple of $e_1$. Now the statement «$(B,v)$ is $|v|$ times the area of the projection» becomes obvious (we literally multiply $|v|$ by the cofactor, which is manifestly equal to this area; this was discussed in (2) anyway).
Now I must admit the statement we get is more like «the volume of a parallelepiped $\langle u,\text{base}\rangle$ is the product of the length of $u$ and the area of the projection of its base on the hyperplane orthogonal to $u$» — but it's of course equivalent to «the volume of a parallelepiped is the product of the area of its base and its height».
Zeros are a good thing, as they mean there is no contribution from the cofactor there.
$$ \det A = 1 \cdot (-1)^{1 + 1} \det S_{11} + 2 \cdot (-1)^{1+2} \det S_{12} + 0 \cdot \dotsb + 0 \cdot \dotsb $$ with $$ S_{11} = \begin{pmatrix} \times & \times & \times & \times \\ \times & 4 & 0 & 0 \\ \times & 0 & 5 & 6 \\ \times & 0 & 7 & 8 \end{pmatrix} = \begin{pmatrix} 4 & 0 & 0 \\ 0 & 5 & 6 \\ 0 & 7 & 8 \end{pmatrix} \\ S_{12} = \begin{pmatrix} \times & \times & \times & \times \\ 3 & \times & 0 & 0 \\ 0 & \times & 5 & 6 \\ 0 & \times & 7 & 8 \end{pmatrix} = \begin{pmatrix} 3 & 0 & 0 \\ 0 & 5 & 6 \\ 0 & 7 & 8 \end{pmatrix} $$ where $S_{ij}$ is the matrix $A$ with row $i$ and column $j$ removed.
The determinants of $S_{11}$ and $S_{12}$ are then calculated again by expansion along the first row, e.g. $$ \det S_{11} = 4 \cdot (-1)^{1+1} \det \begin{pmatrix} \times & \times & \times \\ \times & 5 & 6 \\ \times & 7 & 8 \end{pmatrix} = 4 \det \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix} \\ \det S_{12} = 3 \cdot (-1)^{1+1} \det \begin{pmatrix} \times & \times & \times \\ \times & 5 & 6 \\ \times & 7 & 8 \end{pmatrix} = 3 \det \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix} $$ until one hits a $2\times2$ matrix where one knows the direct formula.
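The recursion described above (expand along the first row, recurse on the minors $S_{1j}$, stop at the $2\times2$ base case) can be sketched directly. This is a minimal pure-Python illustration, exponentially slow and meant only for small matrices; note how zero entries in the first row are skipped, which is exactly why «zeros are a good thing»:

```python
def det(A):
    """Determinant by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    if n == 2:
        return A[0][0]*A[1][1] - A[0][1]*A[1][0]
    total = 0
    for j in range(n):
        if A[0][j] == 0:
            continue  # zero entry: that cofactor contributes nothing
        # Minor S_{1j}: matrix A with row 1 and column j+1 removed.
        minor = [row[:j] + row[j+1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

# The block-diagonal matrix from the example above.
A = [[1, 2, 0, 0],
     [3, 4, 0, 0],
     [0, 0, 5, 6],
     [0, 0, 7, 8]]
print(det(A))  # prints 4, i.e. (1*4 - 2*3) * (5*8 - 6*7) = (-2)*(-2)
```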
Best Answer
Yes, provided you keep track of the changes to the determinant. Any combination of row reductions and cofactor expansions can be used. For example, $$\begin{vmatrix}5 & 2 & 3 \\ 12 & 4 & 6 \\ 3 & 4 & 7\end{vmatrix} = 2\begin{vmatrix}5 & 2 & 3 \\ 6 & 2 & 3 \\ 3 & 4 & 7\end{vmatrix} = 2\begin{vmatrix}5 & 2 & 3 \\ 1 & 0 & 0 \\ 3 & 4 & 7\end{vmatrix},$$ where we have first factored out a $2$ from row $2$ and then subtracted row $1$ from row $2$. Now we expand along row $2$ to get $$2\begin{vmatrix}5 & 2 & 3 \\ 1 & 0 & 0 \\ 3 & 4 & 7\end{vmatrix} = 2(-1)^{2+1}\begin{vmatrix} 2 & 3 \\ 4 & 7\end{vmatrix} = -2\begin{vmatrix} 2 & 3 \\ 0 & 1\end{vmatrix},$$ where in the last step we subtract twice row $1$ from row $2$. Now we simply multiply the diagonal entries of the resulting (upper triangular) matrix to get a determinant equal to $-4$.
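Each intermediate matrix in the example above can be checked numerically. A quick pure-Python sketch (`det3` is just a helper computing a $3\times3$ determinant by first-row expansion, not part of the answer); every printed value is the same $-4$, confirming that the mixed strategy preserves the determinant:

```python
def det3(M):
    a, b, c = M
    return (a[0]*(b[1]*c[2] - b[2]*c[1])
            - a[1]*(b[0]*c[2] - b[2]*c[0])
            + a[2]*(b[0]*c[1] - b[1]*c[0]))

A = [[5, 2, 3], [12, 4, 6], [3, 4, 7]]  # original matrix
B = [[5, 2, 3], [6, 2, 3], [3, 4, 7]]   # after factoring 2 out of row 2
C = [[5, 2, 3], [1, 0, 0], [3, 4, 7]]   # after subtracting row 1 from row 2

print(det3(A))              # -4
print(2 * det3(B))          # -4
print(2 * det3(C))          # -4
print(-2 * (2*7 - 3*4))     # -4, the final cofactor step
```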