Jacobi’s Equality Between Complementary Minors of Inverse Matrices – Linear Algebra

determinants, linear algebra, matrices

What's a quick way to prove the following fact about minors of an invertible matrix $A$ and its inverse?

Let $A[I,J]$ denote the submatrix of an $n \times n$ matrix $A$ obtained by keeping only the rows indexed by $I$ and columns indexed by $J$. Then

$$ |\det A[I,J]| = | (\det A) \det A^{-1}[J^c,I^c]|,$$
where $I^c$ stands for $[n] \setminus I$, and $|I| = |J|$. The identity is trivial when $|I| = |J| = 1$ or $n-1$ (both cases amount to the cofactor formula for the inverse). This is apparently due to Jacobi, but I couldn't find a proof anywhere in books or online. Horn and Johnson list it as one of the advanced formulas in their preliminary chapter, but don't give a proof. In general, what's a reliable source for proofs of all these little facts? I ran into this question while reading Macdonald's book on symmetric functions and Hall polynomials, in particular page 22, where he explains the determinantal relation between the elementary symmetric functions $e_\lambda$ and the complete symmetric functions $h_\lambda$.
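Not part of the original question, but here is a quick numerical sanity check of the identity in numpy; the matrix size and the index sets (0-based here) are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))   # a random matrix is almost surely invertible
Ainv = np.linalg.inv(A)

I = [0, 2, 3]                                # rows kept (0-based), |I| = |J|
J = [1, 2, 5]                                # columns kept
Ic = [i for i in range(n) if i not in I]     # I^c = [n] \ I
Jc = [j for j in range(n) if j not in J]     # J^c = [n] \ J

lhs = abs(np.linalg.det(A[np.ix_(I, J)]))
rhs = abs(np.linalg.det(A) * np.linalg.det(Ainv[np.ix_(Jc, Ic)]))
print(np.isclose(lhs, rhs))                  # expect True up to rounding
```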

I also spent three hours trying to crack this nut, but could only show it for diagonal matrices 🙁
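For what it's worth, here is the diagonal case spelled out (presumably the computation meant above): for $A = \operatorname{diag}(a_1,\dots,a_n)$ both sides vanish unless $I = J$, and for $I = J$ the identity reads $$ \det A[I,I]=\prod_{i\in I}a_i=\Big(\prod_{i=1}^n a_i\Big)\prod_{i\in I^c}a_i^{-1}=(\det A)\,\det A^{-1}[I^c,I^c]. $$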

Edit: It looks like Ferrar's book on algebra, subtitled "Determinants, Matrices and Algebraic Forms", might contain a proof of this in Chapter 5, though the book seems to have a sexist bias.

Best Answer

The key word under which you will find this result in modern books is "Schur complement". Here is a self-contained proof. Without loss of generality, assume $I = J = \{1,2,\dots,k\}$ for some $k$ (you may reorder rows and columns; this changes each determinant only by a sign, which the absolute values in the statement absorb). Write the matrix as $$ M=\begin{bmatrix}A & B\\\\ C & D\end{bmatrix}, $$ where the blocks $A$ and $D$ are square, so that $A = M[I,J]$ is the submatrix in question. Assume for now that $A$ is invertible; the general case follows by a continuity argument. Let $S=D-CA^{-1}B$ be the so-called Schur complement of $A$ in $M$.
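As a quick sanity check (my addition, not part of the original answer): when all four blocks are $1\times 1$, say $A=(a)$, $B=(b)$, $C=(c)$, $D=(d)$ with $a\neq 0$, the Schur complement is $S = d - ca^{-1}b$, and $$ \det A\,\det S = a\Big(d-\frac{cb}{a}\Big)=ad-bc=\det M, $$ anticipating formula (2) below.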

You may verify the following identity (the "magic wand Schur complement formula"): $$ \begin{bmatrix}A & B\\\\ C & D\end{bmatrix} = \begin{bmatrix}I & 0\\\\ CA^{-1} & I\end{bmatrix} \begin{bmatrix}A & 0\\\\ 0 & S\end{bmatrix} \begin{bmatrix}I & A^{-1}B\\\\ 0 & I\end{bmatrix}. \tag{1} $$ Taking determinants (the outer factors are triangular with unit diagonal), $$\det M=\det A \det S. \tag{2}$$ Moreover, inverting the factorization term by term shows that the $(2,2)$ block of $M^{-1}$ is $S^{-1}$. Hence $$\det M \cdot \det M^{-1}[J^c,I^c] = \det M \cdot \det S^{-1} = \det A$$ by (2), which is exactly your claim.
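For completeness, here is the term-by-term inversion spelled out (an expansion of the step above, not in the original answer): inverting each factor of (1) and reversing their order gives $$ M^{-1} = \begin{bmatrix}I & -A^{-1}B\\\\ 0 & I\end{bmatrix} \begin{bmatrix}A^{-1} & 0\\\\ 0 & S^{-1}\end{bmatrix} \begin{bmatrix}I & 0\\\\ -CA^{-1} & I\end{bmatrix}, $$ and multiplying out the bottom-right block yields exactly $S^{-1}$.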

Note that the "magic formula" (1) can be derived via block Gaussian elimination and is much less magical than it looks at first sight.
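Indeed, a single block row operation already does most of the work (my own elaboration): subtracting $CA^{-1}$ times the first block row from the second gives $$ \begin{bmatrix}I & 0\\\\ -CA^{-1} & I\end{bmatrix} \begin{bmatrix}A & B\\\\ C & D\end{bmatrix} = \begin{bmatrix}A & B\\\\ 0 & S\end{bmatrix}, $$ and the analogous block column operation clears the remaining $B$; moving the triangular factors to the other side is exactly (1).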