Understanding the matrix of cofactors

determinant

Suppose we're working within the vector space of $2 \times 2$ matrices over $\mathbb{R}$. For any matrix,
$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix},$$
it should be the case that, irrespective of whether or not $A$ is invertible, there exists a matrix $B$ such that
$$AB = BA = \det A \cdot I,$$
and $B$ is the matrix of cofactors.

In this lecture I am watching, the professor argues that the matrix of cofactors of the above matrix $A$ is given by
$$\begin{pmatrix} d & -b \\ -c & a\end{pmatrix}.$$
Though this should be correct, since the product gives $ad - bc$ along the main diagonal and zeros everywhere else, i.e. $\det A \cdot I$, I am for some reason unable to rederive the result. If I take $A$ and compute the matrix of minors, I get:
\begin{align*}
\text{minor}(1,1) & = \det(d) = d \\
\text{minor}(1,2) & = \det(c) = c \\
\text{minor}(2,1) & = \det(b) = b \\
\text{minor}(2,2) & = \det(a) = a.
\end{align*}

The matrix of cofactors is, hence,
$$\begin{pmatrix} d & -c \\ -b & a \end{pmatrix}.$$
This doesn't match, and I cannot figure out what I have done wrong, even though I am sure it is probably rather obvious.

Could someone take a look at this? In addition, I would be very interested in truly understanding the underlying purpose of these computations. This isn't the way I originally learned determinants, but I am sure there is more meaning to it than brute-force computation.

Best Answer

This is because the matrix $B$ in the relationship $AB = BA = \det A \cdot I$ is not the cofactor matrix but the adjugate matrix: the transpose of the cofactor matrix.
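Concretely, transposing the cofactor matrix you computed recovers the professor's matrix (for a $2 \times 2$ matrix, transposing simply swaps the two off-diagonal entries):
$$\operatorname{adj}(A) = \begin{pmatrix} d & -c \\ -b & a \end{pmatrix}^{\mathsf{T}} = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$$
So both computations are right; they just name different matrices.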

This works because the product yields, in the diagonal entries, the Laplace expansion (also called the cofactor expansion) of $\det A$, and, in the off-diagonal entries, the Laplace expansion of the determinant of a matrix with two identical rows or columns, which is $0$.
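In the $2 \times 2$ case you can check this by direct multiplication:
$$A \operatorname{adj}(A) = \begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix} = \begin{pmatrix} ad - bc & -ab + ab \\ cd - cd & ad - bc \end{pmatrix} = (ad - bc) I = \det A \cdot I.$$
The diagonal entries are exactly the cofactor expansion of $\det A$, while an off-diagonal entry such as $-ab + ab$ is the cofactor expansion of $\det\begin{pmatrix} a & b \\ a & b \end{pmatrix}$, a determinant with a repeated row, hence $0$.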
