[Math] A direct proof of the properties of the matrix of minors

linear-algebra, matrices

Let $A$ be an invertible $n\times n$ matrix. Define the matrix of minors $\Delta(A)$ of $A$ to be the matrix whose $(i,j)$ entry is the determinant of the submatrix of $A$ with the $i$th row and $j$th column omitted.

Certainly, the most important property of $\Delta(A)$ is that it is instrumental in computing the inverse of $A$:
$$ A^{-1}= \frac{1}{\operatorname{det}(A)}\sigma\cdot \Delta(A)^T\cdot \sigma$$
where $\sigma$ is the diagonal matrix whose diagonal entries alternate between $1$ and $-1$.
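To pin down the sign conventions, here is a minimal numerical sketch of this inverse formula (the helper name `matrix_of_minors` and the random test matrix are my own choices, not from the question):

```python
import numpy as np

def matrix_of_minors(A):
    """Delta(A): the (i, j) entry is det of A with row i and column j deleted."""
    n = A.shape[0]
    return np.array([[np.linalg.det(np.delete(np.delete(A, i, 0), j, 1))
                      for j in range(n)] for i in range(n)])

n = 4
A = np.random.default_rng(0).standard_normal((n, n))
# sigma: diagonal matrix with alternating +1 / -1 entries
sigma = np.diag([(-1) ** k for k in range(n)])

# A^{-1} = (1/det A) * sigma * Delta(A)^T * sigma
lhs = np.linalg.inv(A)
rhs = sigma @ matrix_of_minors(A).T @ sigma / np.linalg.det(A)
assert np.allclose(lhs, rhs)
```

Note that $\sigma\,\Delta(A)^T\,\sigma$ has $(i,j)$ entry $(-1)^{i+j}\Delta(A)_{j,i}$, which is exactly the classical adjugate.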

From this fact and elementary properties of the inverse, it is easy to prove the following

  • Taking the matrix of minors is an involution up to scaling; that is, $\Delta(\Delta(A))=\operatorname{det}(A)^{n-2}\cdot A$.
  • Taking the matrix of minors is a group homomorphism; that is, $\Delta(AB)=\Delta(A)\Delta(B)$.
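Both identities are easy to sanity-check numerically. A small sketch (the helper name and the random test matrices are my own, not part of the question):

```python
import numpy as np

def matrix_of_minors(A):
    # Delta(A): the (i, j) entry is det of A with row i and column j deleted
    n = A.shape[0]
    return np.array([[np.linalg.det(np.delete(np.delete(A, i, 0), j, 1))
                      for j in range(n)] for i in range(n)])

n = 4
rng = np.random.default_rng(1)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# involution up to scaling: Delta(Delta(A)) = det(A)^(n-2) * A
assert np.allclose(matrix_of_minors(matrix_of_minors(A)),
                   np.linalg.det(A) ** (n - 2) * A)

# multiplicativity: Delta(AB) = Delta(A) Delta(B)
assert np.allclose(matrix_of_minors(A @ B),
                   matrix_of_minors(A) @ matrix_of_minors(B))
```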

If you actually write out either of these identities in terms of minors, you get a series of non-trivial-looking identities on the minors of an invertible matrix.

Is this the easiest way to obtain these identities on minors? It would bother me if it were, for a couple of reasons. First, it seems extremely likely that the second identity above holds for non-invertible matrices as well. Second, if $A$ is totally positive or totally non-negative (that is, every minor of every size is positive or non-negative, respectively), then so is $\Delta(A)$. However, the inverse of a totally positive matrix is almost never totally positive, and so the above proof of the identities strays off the totally positive path.
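Indeed, each entry of $\Delta(AB)-\Delta(A)\Delta(B)$ is a polynomial in the entries of $A$ and $B$, so vanishing on the (dense) invertible matrices forces it to vanish identically. A quick numerical spot-check with a singular $A$ (the example matrices are my own):

```python
import numpy as np

def matrix_of_minors(A):
    # Delta(A): the (i, j) entry is det of A with row i and column j deleted
    n = A.shape[0]
    return np.array([[np.linalg.det(np.delete(np.delete(A, i, 0), j, 1))
                      for j in range(n)] for i in range(n)])

# A is singular: its second row is twice its first
A = np.array([[1., 2., 0., 1.],
              [2., 4., 0., 2.],
              [0., 1., 3., 1.],
              [1., 0., 1., 2.]])
B = np.random.default_rng(2).standard_normal((4, 4))

assert abs(np.linalg.det(A)) < 1e-9
assert np.allclose(matrix_of_minors(A @ B),
                   matrix_of_minors(A) @ matrix_of_minors(B))
```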

Therefore,

Can the identities on minors implied by the above relations on $\Delta$ be proven in a more direct way?

Ideally, this proof would be 'subtraction-free', to make it more natural in the totally-positive setting.

Edit: I should be a little clearer about what sort of thing I am looking for. I am interested in spaces of totally-positive matrices and the corresponding algebras generated by minors on them (specifically, they are cluster algebras). An important involution of these spaces and algebras is the matrix of minors defined above. Note that taking inverses or cofactors does not preserve total positivity, and so neither operation acts on this space.

Mainly, I asked this question because I was annoyed at using a map whose properties couldn't be proven without passing to a different, bigger space (not that this doesn't happen all the time in math). I was curious whether there was a proof along algebraic lines, following only from the simplest relations on minors (the 3-term Plücker relations).

Best Answer

Combinatorial interpretation

Let's start with a combinatorial interpretation of $\Delta(A)_{i,j}$. It is a signed sum of terms, each of which corresponds to a directed $1$-regular graph. Each term is the product of the corresponding entries of $A$, except that we do not count the $(j,i)$ edge. In other words, the graph is not really $1$-regular, since $i$ has no in-edge and $j$ has no out-edge; so there is an $i$ to $j$ path, and the rest of the graph is a union of cycles. Each term is weighted by the sign of the corresponding permutation.
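Concretely, this interpretation amounts to expanding the $(i,j)$ minor as a signed sum over bijections from the rows other than $i$ to the columns other than $j$. A brute-force sketch of that expansion (function names and the test matrix are my own):

```python
import numpy as np
from itertools import permutations

def perm_sign(perm):
    # sign of a permutation of {0, .., m-1}: a cycle of length L contributes (-1)^(L-1)
    sign, seen = 1, [False] * len(perm)
    for start in range(len(perm)):
        if not seen[start]:
            length, k = 0, start
            while not seen[k]:
                seen[k] = True
                k = perm[k]
                length += 1
            if length % 2 == 0:
                sign = -sign
    return sign

def minor_leibniz(A, i, j):
    # Delta(A)_{i,j} as a signed sum over bijections {rows != i} -> {columns != j}
    n = A.shape[0]
    rows = [r for r in range(n) if r != i]
    cols = [c for c in range(n) if c != j]
    total = 0.0
    for perm in permutations(range(n - 1)):
        term = perm_sign(perm)
        for k, r in enumerate(rows):
            term *= A[r, cols[perm[k]]]
        total += term
    return total

A = np.random.default_rng(3).standard_normal((4, 4))
for i in range(4):
    for j in range(4):
        sub = np.delete(np.delete(A, i, 0), j, 1)
        assert np.isclose(minor_leibniz(A, i, j), np.linalg.det(sub))
```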

Multiplicativity

Now $\Delta(AB)_{i,j}$ is the same thing, except that each $(x,y)$ edge now corresponds to a pair of edges $(x,z),(z,y)$, with $z$ arbitrary. Suppose a vertex $z$ appears twice as an intermediate, in both $(x_1,z),(z,y_1)$ and $(x_2,z),(z,y_2)$. Replace this wiring with the wiring $(x_1,z),(z,y_2)$ and $(x_2,z),(z,y_1)$. This corresponds to a transposition, so the sign flips and the two terms cancel. We are left with only the terms in which each vertex appears at most once as an intermediate.

Similarly, in $(\Delta(A)\Delta(B))_{i,j}$ we have two graphs superimposed, with an $i$ to $j$ path composed of an $A$ part followed by a $B$ part. We can now construct a $\Delta(AB)_{i,j}$ path in the obvious way; this is a reversible process (the missing $z$ vertex is the one in the middle of the $i$ to $j$ path). Since sign is multiplicative, the signs agree.

Involution up to scaling

Consider now $\Delta(\Delta(A))_{i,j}$. On the first level, we have an almost $1$-regular graph, with some $i$ to $j$ path. On the second level, each edge $(x,y)$ on the first level corresponds to some $1$-regular graph with some $x$ to $y$ path.

Let's count the in- and out-degrees of all the vertices. We start with $n-1$ each. For each edge $(x,y)$, remove $1$ from the in-degree of $x$, and $1$ from the out-degree of $y$. Every vertex other than $i$ has an incoming edge, and every vertex other than $j$ has an outgoing edge. Let's assume that $i \neq j$ (the other case is simpler). Thus the out-degree of $i$ is $n-1$, and of every other vertex it is $n-2$; the in-degree of $j$ is $n-1$, and of every other vertex it is $n-2$.

Suppose the path from $i$ to $j$ is not direct, for example $i$ goes to $k$ goes to $l$ (which might equal $j$). It is possible that if you're cunning enough, you will be able to permute $k$ and $j$ in such a way that the result will have the same value but opposite sign. Given that, we are left only with terms in which the $i$ to $j$ path is direct. The rest of the multigraph is just an arbitrary $(n-2)$-regular graph, which explains the factor $(\det A)^{n-2}$.

It remains for someone in the audience to find the correct cut-and-paste argument...
