Flaw in a proof of $\det AB=\det A\det B$

determinant, linear algebra, matrices, proof-verification

Since the elementary row operations, namely exchanging rows, multiplying a row by a scalar, and subtracting one row from another, do not affect the determinant, we only need to consider upper triangular matrices.

And for an upper triangular matrix $A$, $\det A$ is just the product of its diagonal entries.

And if we multiply two upper triangular matrices, $A,B$, we have

$$AB=\left[\begin{array}{cccc}
a_{11}&\dots&\dots&\dots\\
0&a_{22}&\dots&\dots\\
0&0&\ddots&\vdots\\
0&0&\dots&a_{nn}
\end{array}\right]\left[\begin{array}{cccc}
b_{11}&\dots&\dots&\dots\\
0&b_{22}&\dots&\dots\\
0&0&\ddots&\vdots\\
0&0&\dots&b_{nn}
\end{array}\right]\\
=\left[\begin{array}{cccc}
a_{11}b_{11}&\dots&\dots&\dots\\
0&a_{22}b_{22}&\dots&\dots\\
0&0&\ddots&\vdots\\
0&0&\dots&a_{nn}b_{nn}
\end{array}\right].$$

So $\det AB=\det A \det B.$
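For what it's worth, the displayed step checks out numerically; here is a throwaway sketch with random upper triangular matrices (using numpy, purely as a sanity check):

```python
import numpy as np

# Sanity check of the displayed step: the product of two upper triangular
# matrices is upper triangular, and its diagonal is the entrywise product
# of the two diagonals.  (Random matrices, just for illustration.)
n = 4
A = np.triu(np.random.rand(n, n))
B = np.triu(np.random.rand(n, n))
P = A @ B

assert np.allclose(np.tril(P, -1), 0)                     # product stays upper triangular
assert np.allclose(np.diag(P), np.diag(A) * np.diag(B))   # diagonals multiply entrywise
assert np.isclose(np.linalg.det(P), np.linalg.det(A) * np.linalg.det(B))
```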

I feel this should prove the equality. Any flaw in this reasoning?

EDIT: In fact, multiplying a row by a scalar does affect the result.

Is approaching it from this direction a dead end?

Best Answer

Given $A$ and $B$ we can find products of elementary matrices $U_1$ and $U_2$ with determinant $1$ such that $A'=U_1A$ and $B'=U_2B$ are upper triangular. As you observed, we have

  1. $\det A=\det A'$,
  2. $\det B=\det B'$, and
  3. $\det(A'B')=\det(A')\det(B')$.

The last follows from the simple way the diagonal entries behave when multiplying upper triangular matrices.

But why would we have $\det(AB)=\det(A'B')$? The problem at this point is that $$ A'B'=U_1AU_2B\qquad(*) $$ is not obtained from $AB$ by a sequence of elementary row operations. In other words, we don't have a product of the form $U_3AB$ on the right-hand side of $(*)$. So it is not obvious that $\det(AB)$ should equal $\det(A'B')$.

Put yet another way:

Applying those row operations to the factors $A$ and $B$ does make them upper triangular, but that process disturbs their product.
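Here is a small numerical illustration of that disturbance (the $2\times2$ matrices and the elementary matrices below are my own throwaway choices, not part of the argument): triangularizing the factors separately and triangularizing $AB$ directly produce genuinely different upper triangular matrices, so the equality of their determinants is exactly what still needs proof.

```python
import numpy as np

# Arbitrary concrete factors, chosen so one type-3 row operation triangularizes each.
A = np.array([[1., 2.], [3., 4.]])
B = np.array([[2., 1.], [4., 5.]])

# Determinant-1 elementary matrices that triangularize A and B separately.
U1 = np.array([[1., 0.], [-3., 1.]])   # R2 <- R2 - 3*R1
U2 = np.array([[1., 0.], [-2., 1.]])   # R2 <- R2 - 2*R1
A_tri, B_tri = U1 @ A, U2 @ B          # both upper triangular

# Triangularize the product AB directly with its own row operation.
AB = A @ B
U3 = np.array([[1., 0.], [-AB[1, 0] / AB[0, 0], 1.]])
AB_tri = U3 @ AB

print(A_tri @ B_tri)   # [[ 2.   7. ] [ 0.  -6. ]]
print(AB_tri)          # [[10.  11. ] [ 0.  -1.2]]  -- a different triangular matrix
# The two triangular matrices do not coincide, so reducing the factors is not
# the same thing as reducing the product; their determinants do agree (both -12),
# but that equality is exactly what still has to be proved.
```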


Edit: Adding an explanation as to why we only need elementary row operations of the type "add a scalar multiple of one row to another". This was commented by eyeballfrog, but is not usually covered in linear algebra texts because it would be very kludgy to do e.g. Gaussian elimination this way.

Let $d$ be a non-zero scalar, and consider the following sequence of operations (of this type). Only two rows are shown, since that suffices to make the point.
$$
\begin{aligned}
&\left(\begin{array}{cc}1&0\\0&1\end{array}\right)&\to&\left(\begin{array}{cc}1&-d\\0&1\end{array}\right)
&\to&\left(\begin{array}{cc}1&-d\\d^{-1}&0\end{array}\right)\\
\to&\left(\begin{array}{cc}0&-d\\d^{-1}&0\end{array}\right)&\to&\left(\begin{array}{cc}0&-d\\d^{-1}&d\end{array}\right)
&\to&\left(\begin{array}{cc}d^{-1}&0\\d^{-1}&d\end{array}\right)\\
\to&\left(\begin{array}{cc}d^{-1}&0\\0&d\end{array}\right).
\end{aligned}
$$

  • The last form shows that a sequence of row operations of this type can multiply one row by $d$ and another by $d^{-1}$. This is the best we can do when multiplying rows, because now we are constrained to $\det=1$ operations.
  • Set $d=1$ and look at the first matrix on the second line (= the fourth matrix altogether). It has the form of the elementary matrix that interchanges two rows, except that it also multiplies one of them by $-1$. Again, this side effect is necessary to keep $\det=1$.
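If you want to verify that sequence mechanically, here is a quick numerical check (a numpy sketch; $d=3$ is an arbitrary non-zero test value):

```python
import numpy as np

d = 3.0  # any non-zero scalar; 3.0 is an arbitrary test value

# The six "add a multiple of one row to another" operations from the display,
# written as determinant-1 elementary matrices acting on the left.
E = [
    np.array([[1., -d], [0., 1.]]),       # R1 <- R1 - d*R2
    np.array([[1., 0.], [1. / d, 1.]]),   # R2 <- R2 + (1/d)*R1
    np.array([[1., -d], [0., 1.]]),       # R1 <- R1 - d*R2
    np.array([[1., 0.], [-1., 1.]]),      # R2 <- R2 - R1
    np.array([[1., 1.], [0., 1.]]),       # R1 <- R1 + R2
    np.array([[1., 0.], [-1., 1.]]),      # R2 <- R2 - R1
]

M = np.eye(2)
for step in E:
    M = step @ M                          # apply each operation in turn

assert np.allclose(M, np.diag([1. / d, d]))              # net effect: rows scaled by 1/d and d
assert all(np.isclose(np.linalg.det(step), 1.) for step in E)
```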