[Math] connection between graphs and the eigenvectors of their matrix representation

graph-theory, linear-algebra, matrices

I am trying to learn graph theory and the linear algebra used to analyse graphs. The texts I have read prove many lemmas and theorems. The proofs are convincing, but I fail to see the intuition behind them: how do they connect to the properties of the graphs?

I understand eigenvectors very clearly when they come from the transition matrix of a Markov chain, because the eigenvector with eigenvalue $1$ represents a stationary distribution of the chain. But what about the matrices derived from graphs, such as the adjacency matrix? Here I cannot see what relevance the eigenvectors/eigenvalues have to the graph. I cannot even imagine why one would multiply anything by those matrices in the first place.
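For instance, here is a small sketch of the Markov-chain picture I do understand, using a hypothetical two-state chain (the matrix `P` is made up for illustration):

```python
import numpy as np

# A hypothetical 2-state Markov chain; each row sums to 1.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# A stationary distribution is a left eigenvector of P with eigenvalue 1,
# i.e. an ordinary eigenvector of P transposed.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.isclose(vals, 1.0))])
pi = pi / pi.sum()  # normalize to a probability distribution

# One more step of the chain leaves it unchanged.
print(pi, np.allclose(pi @ P, pi))
```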

Maybe I don't fully appreciate the power and depth of eigenvectors/eigenvalues, but I suspect this application would reveal more.

Best Answer

The abstract point of view is that the graph is characterized by its adjacency matrix up to permutation. When we view the adjacency matrix as an abstract linear transformation, we are looking at it up to conjugation, which is stronger. So we lose some information, but the information that is left is still interesting. And any time we can apply linear algebra to a situation, that is a good thing because linear algebra is really easy compared to almost anything else. (This sounds trite, but it is one of the most-used principles in mathematics.)

The eigenvalues of the adjacency matrix describe closed walks on the graph. More precisely, if $A$ is the adjacency matrix, then the total number of closed walks of length $n$ on the graph is given by $\text{tr } A^n$, which is the same as $\sum_i \lambda_i^n$ where $\lambda_i$ are the eigenvalues. So knowing the eigenvalues says something about how closed walks grow. Unfortunately this perspective does not provide a direct interpretation of the eigenvectors.
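A quick numerical sketch of this identity, on a triangle chosen for illustration (where the six closed walks of length $3$ are just the two directions around the cycle, starting from each of the three vertices):

```python
import numpy as np

# Adjacency matrix of the triangle (3-cycle).
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])

n = 3

# Count closed walks of length n directly: tr(A^n).
closed_walks = np.trace(np.linalg.matrix_power(A, n))

# Same count from the spectrum: sum of lambda_i^n.
eigvals = np.linalg.eigvalsh(A)  # eigenvalues are 2, -1, -1
spectral_count = sum(eigvals**n)  # 2^3 + (-1)^3 + (-1)^3 = 6

print(closed_walks, round(float(spectral_count)))
```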

Spectral methods apply particularly well to graphs with a lot of structure, such as strongly regular graphs. In the best case one can write down a matrix equation the adjacency matrix satisfies, and analyzing what this says about the eigenvectors and eigenvalues puts strong constraints on the graph. This is the mechanism behind the standard proof of the classification of Moore graphs of girth $5$.

There is a fairly concrete interpretation of the eigenvalues and the eigenvectors of the Laplacian matrix, as I explained in this math.SE answer. The short version is that one can set up three natural differential equations using the Laplacian, and these all give concrete interpretations of the eigenvalues and eigenvectors:

  • The wave equation. Here the eigenvectors are the "harmonics" of the graph: they describe the standing wave solutions of the wave equation, and the eigenvalues describe the frequencies at which these waves resonate.
  • The heat equation. Here the eigenvectors are still the "harmonics" of the graph, but the eigenvalues describe the rate at which heat decays for each harmonic. Morally this is about "Brownian motion" on the graph, so it is related to random walks.
  • The Schrödinger equation. Here the eigenvectors are energy eigenstates of a continuous-time quantum random walk, and the eigenvalues are (up to a constant) energy eigenvalues.
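To make the harmonics concrete, here is a small sketch (a path graph chosen for illustration) building the Laplacian and checking its most basic harmonic: the constant vector with eigenvalue $0$, the mode of the heat equation $\frac{du}{dt} = -Lu$ that never decays.

```python
import numpy as np

# Path graph on 4 vertices: 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
D = np.diag(A.sum(axis=1))  # degree matrix
L = D - A                   # graph Laplacian

# Eigenvectors = "harmonics", eigenvalues = decay rates / frequencies.
vals, vecs = np.linalg.eigh(L)

# Smallest eigenvalue is 0; its harmonic is the constant vector.
print(np.isclose(vals[0], 0.0))
print(np.allclose(vecs[:, 0]**2, 0.25))  # constant up to sign
```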

When the graph is regular, the Laplacian and adjacency matrix are related in a very simple way; the eigenvectors are the same, and the eigenvalues can be easily related to each other. Things are more complicated in the irregular case.
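For a $d$-regular graph the relation is $L = dI - A$: the eigenvectors coincide, and eigenvalue $\mu$ of $A$ corresponds to eigenvalue $d - \mu$ of $L$. A sketch on the $4$-cycle (a $2$-regular graph, chosen for illustration):

```python
import numpy as np

# Adjacency matrix of the 4-cycle, which is 2-regular.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
d = 2
L = d * np.eye(4) - A  # Laplacian of a d-regular graph

a_vals = np.sort(np.linalg.eigvalsh(A))  # 2, 0, 0, -2
l_vals = np.sort(np.linalg.eigvalsh(L))  # 0, 2, 2, 4

# Spectrum of L is d minus the spectrum of A.
print(np.allclose(np.sort(d - a_vals), l_vals))
```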
