[Math] The Matrix-Tree Theorem without the matrix

co.combinatorics, graph theory, teaching

I'm teaching an introductory graph theory course in the Fall, which I'm excited about because it gives me the chance to improve my understanding of graphs (my work is in topology). A highlight for me will be to teach the Matrix-Tree Theorem, which I think is the only place that linear algebra is used in the course.

Let κ(G) denote the number of spanning trees of G (the complexity of G), and let L(G) denote the Laplacian matrix of G. Finally, for a vertex v of G, let L(v|v) denote the Laplacian matrix with the row and column corresponding to v deleted.

Matrix-Tree Theorem: κ(G) = det L(v|v).
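
For concreteness, here is a quick sanity check of the statement on a small example (K4, which has 4^(4-2) = 16 spanning trees by Cayley's formula). The variable names, the choice of graph, and the use of numpy for the determinant are illustrative choices, not part of the theorem itself:

```python
from itertools import combinations

import numpy as np

n = 4
vertices = range(n)
edges = list(combinations(vertices, 2))   # K4: all 6 edges

# Laplacian L = D - A, built entry by entry.
L = np.zeros((n, n), dtype=int)
for i, j in edges:
    L[i, i] += 1
    L[j, j] += 1
    L[i, j] -= 1
    L[j, i] -= 1

# det L(v|v): delete the row and column of v = 0 and take the determinant.
v = 0
reduced = np.delete(np.delete(L, v, axis=0), v, axis=1)
kappa_det = round(np.linalg.det(reduced))

# Brute force: a set of n-1 edges is a spanning tree iff it contains no cycle.
def is_spanning_tree(subset):
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in subset:
        ri, rj = find(i), find(j)
        if ri == rj:
            return False       # a repeated root means this edge closes a cycle
        parent[ri] = rj
    return True                # n-1 edges and acyclic, hence a spanning tree

kappa_brute = sum(is_spanning_tree(s) for s in combinations(edges, n - 1))

print(kappa_det, kappa_brute)  # both should be 16
```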

It seems a shame for linear algebra to be a prerequisite for my course, and in any event I don't expect most of my students to be great linear algebra experts. Partly because my students might not remember things like Cauchy-Binet, but mainly so that I myself feel I can really understand what I'm teaching, I wonder how the Matrix-Tree Theorem could be proved without ever mentioning matrices. On a planet with strong combinatorics, where linear algebra had never been discovered, how would they prove the Matrix-Tree Theorem?

The RHS of the Matrix-Tree Theorem makes sense without ever mentioning matrices, via the Lindström-Gessel-Viennot(-Karlin-McGregor) Lemma. Construct a graph H, with a source and a sink corresponding to each vertex of G, so that the signed sum of weights of paths from the i-th source to the j-th sink gives the (i,j) entry of the Laplacian matrix of G (surely there's a clever standard way to do this!), and define the determinant of H to be the signed sum over all n-tuples of non-intersecting paths from the sources to the sinks. This interpretation of the determinant seems a nice topic to teach. Maybe there's even an easier way.
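
To make the Lemma concrete, here is a small sketch checking it on a toy acyclic network with unit edge weights; the network and every name below are illustrative choices of mine, and both sides of the Lemma are computed by brute force:

```python
from itertools import permutations, product

# A toy DAG with unit edge weights, two sources and two sinks.
dag = {
    'a1': ['x', 'y'],
    'a2': ['y'],
    'x':  ['b1', 'b2'],
    'y':  ['b2'],
    'b1': [],
    'b2': [],
}
sources = ['a1', 'a2']
sinks = ['b1', 'b2']

def all_paths(u, target, prefix=()):
    """Yield every directed path from u to target as a tuple of vertices."""
    prefix = prefix + (u,)
    if u == target:
        yield prefix
        return
    for w in dag[u]:
        yield from all_paths(w, target, prefix)

# paths[i][j] lists the paths from the i-th source to the j-th sink;
# with unit weights the "path matrix" entry M[i][j] is just a path count.
paths = [[list(all_paths(s, t)) for t in sinks] for s in sources]
M = [[len(cell) for cell in row] for row in paths]

def sign(perm):
    inversions = sum(perm[a] > perm[b]
                     for a in range(len(perm)) for b in range(a + 1, len(perm)))
    return -1 if inversions % 2 else 1

n = len(sources)

# Left-hand side: determinant of the path matrix, expanded over permutations.
det = 0
for p in permutations(range(n)):
    term = sign(p)
    for i in range(n):
        term *= M[i][p[i]]
    det += term

# Right-hand side: signed sum over families of pairwise vertex-disjoint paths.
lgv = 0
for p in permutations(range(n)):
    for family in product(*(paths[i][p[i]] for i in range(n))):
        used = [v for path in family for v in path]
        if len(used) == len(set(used)):        # no two paths share a vertex
            lgv += sign(p)

print(det, lgv)   # the Lemma says these agree; on this example both are 1
```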

Cauchy-Binet becomes an elementary property of sets of non-intersecting paths in H, but I can't see how to free the rest of the proof of the Matrix-Tree Theorem from linear algebra.

Question: Is there a proof of the Matrix-Tree Theorem which makes no mention of matrices? Is it simple enough that one could teach it in an introductory graph theory course?

Note 1: If the purpose of the Matrix-Tree Theorem is to efficiently calculate the complexity of a graph, then dodging linear algebra is surely counterproductive, because a determinant can be calculated in polynomial time. But, quite apart from my interest in "correctly-tooled" proofs and my teaching goals, I have vague dreams, which are almost certainly nonsense, about better understanding the Alexander polynomial as a quantum invariant along the lines of this question; that might conceivably factor through the Matrix-Tree Theorem (or rather some version of Kirchhoff's formula), and I think I clearly want to stay in the diagrammatic world.

Note 2: This excellent post by Qiaochu Yuan suggests that the answer to my question is in Aigner. I'm travelling right now, and the relevant section isn't on Google Books, so I don't have access to Aigner for the next few weeks. If memory serves, Aigner still uses linear algebra, but maybe I'm wrong… Anyway, if the answer turns out to be "look in Aigner", then please downvote this question; or, if you could summarize how it's done, I would be happier. The answer will surely turn out to be "look in [somewhere]" anyway, because surely this is easy and well-known (I just don't know it)…

Best Answer

A combinatorial proof of the matrix-tree theorem can be found in the paper by D. Zeilberger, "A combinatorial approach to matrix algebra", Discrete Math. 56 (1985), 61–72. The proof uses only the interpretation of the determinant as an alternating sum over permutations. Each term corresponds to a digraph, and the terms whose digraphs are not trees cancel in pairs via a sign-reversing involution.
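
For illustration, here is a small sketch of that point of view: det L(v|v) computed directly from its definition as an alternating sum over permutations, with each term a product of Laplacian entries, so no row reduction or other matrix machinery is needed. The example graph (K4 again) and the helper names are illustrative choices of mine, not taken from the paper:

```python
from itertools import combinations, permutations

n = 4
edges = {frozenset(e) for e in combinations(range(n), 2)}   # K4
deg = {v: sum(v in e for e in edges) for v in range(n)}

def w(i, j):
    """An entry of the Laplacian, viewed only as a weight on an ordered pair."""
    if i == j:
        return deg[i]
    return -1 if frozenset((i, j)) in edges else 0

def sign(perm):
    inversions = sum(perm[a] > perm[b]
                     for a in range(len(perm)) for b in range(a + 1, len(perm)))
    return -1 if inversions % 2 else 1

# det L(v|v) for v = 0, written purely as a signed sum over permutations
# sigma of the remaining vertices; each term is a product of weights
# w(i, sigma(i)), i.e. each term can be read as a digraph on those vertices.
rest = list(range(1, n))
kappa = 0
for sigma in permutations(rest):
    term = sign(sigma)
    for i, j in zip(rest, sigma):
        term *= w(i, j)
    kappa += term

print(kappa)   # 16, the number of spanning trees of K4
```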