I know eigenvalues of a graph's adjacency matrix are very heavily studied because of their relationship to the structure of the graph, but how come no one studies the eigenvectors? Even if they don't say much about the structure, isn't it possible that they might say enough to avoid the problem of isospectral graphs?
Why aren’t the eigenvectors of a graph’s adjacency matrix useful for the graph isomorphism problem?
computational-complexity, graph-theory, graph-isomorphism, spectral-graph-theory
Related Solutions
The abstract point of view is that the graph is characterized by its adjacency matrix up to permutation. When we view the adjacency matrix as an abstract linear transformation, we are looking at it up to conjugation, which is a coarser equivalence. So we lose some information, but the information that is left is still interesting. And any time we can apply linear algebra to a situation, that is a good thing, because linear algebra is really easy compared to almost anything else. (This sounds trite, but it is one of the most-used principles in mathematics.)
The eigenvalues of the adjacency matrix describe closed walks on the graph. More precisely, if $A$ is the adjacency matrix, then the total number of closed walks of length $n$ on the graph is given by $\text{tr } A^n$, which is the same as $\sum_i \lambda_i^n$ where $\lambda_i$ are the eigenvalues. So knowing the eigenvalues says something about how closed walks grow. Unfortunately this perspective does not provide a direct interpretation of the eigenvectors.
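To make this concrete, here is a minimal numerical check of the identity $\text{tr } A^n = \sum_i \lambda_i^n$ (a sketch using numpy; the 4-cycle is just an arbitrary example graph):

```python
import numpy as np

# Adjacency matrix of an example graph (the 4-cycle C4);
# any symmetric 0/1 matrix would do.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

n = 4  # walk length
lam = np.linalg.eigvalsh(A)  # eigenvalues of the symmetric matrix A

walks_via_trace = np.trace(np.linalg.matrix_power(A, n))
walks_via_spectrum = np.sum(lam ** n)

print(walks_via_trace, walks_via_spectrum)  # both are 32 for C4 with n = 4
```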
Spectral methods apply particularly well to graphs with a lot of structure, such as strongly regular graphs. In the best case one can write down a matrix equation the adjacency matrix satisfies, and analyzing what this says about the eigenvectors and eigenvalues puts strong constraints on the graph. This is the mechanism behind the standard proof of the classification of Moore graphs of girth $5$.
There is a fairly concrete interpretation of the eigenvalues and the eigenvectors of the Laplacian matrix, as I explained in this math.SE answer. The short version is that one can set up three natural differential equations using the Laplacian, and these all give concrete interpretations of the eigenvalues and eigenvectors:
- The wave equation. Here the eigenvectors are the "harmonics" of the graph: they describe the standing wave solutions of the wave equation, and the eigenvalues describe the frequencies at which these waves resonate.
- The heat equation. Here the eigenvectors are still the "harmonics" of the graph, but the eigenvalues describe the rate at which heat decays for each harmonic. Morally this is about "Brownian motion" on the graph, so it is related to random walks.
- The Schrödinger equation. Here the eigenvectors are energy eigenstates of a continuous-time quantum random walk, and the eigenvalues are (up to a constant) energy eigenvalues.
When the graph is $k$-regular, the Laplacian and adjacency matrix are related in a very simple way: $L = kI - A$, so the eigenvectors are the same, and each Laplacian eigenvalue $\mu_i$ corresponds to the adjacency eigenvalue $k - \mu_i$. Things are more complicated in the irregular case.
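Here is a small sketch of both points, assuming numpy: it builds the Laplacian of the 4-cycle (a 2-regular graph), uses the eigendecomposition to solve the heat equation $u'(t) = -Lu(t)$, and checks the regular-graph relation between the two spectra:

```python
import numpy as np

# 4-cycle again: a 2-regular graph, so L = 2I - A.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # combinatorial Laplacian

mu, U = np.linalg.eigh(L)    # eigenvalues mu_i, orthonormal eigenvectors U[:, i]

# Heat equation u'(t) = -L u(t): the component along each harmonic
# decays at rate mu_i, so u(t) = U exp(-mu t) U^T u(0).
u0 = np.array([1.0, 0.0, 0.0, 0.0])   # heat concentrated on one vertex
t = 0.5
u_t = U @ (np.exp(-mu * t) * (U.T @ u0))
print(u_t)  # heat spreading out from vertex 0 toward the uniform distribution

# Regular-graph relation: adjacency eigenvalues are k - mu_i (here k = 2).
lam = np.linalg.eigvalsh(A)
print(np.sort(lam), np.sort(2 - mu))  # same multiset
```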
Your definition of automorphism is a bit too strong, I feel. Leaving something "unchanged" in the sense that every cell of the matrix keeps exactly the same number is a VERY strong notion of equality, and you're not using that notion of equality for graphs.
As corrected by Morgan Rodgers, strict equality actually works fine: the adjacency matrix is literally unchanged under an automorphism, provided the matrices are constructed over the same vertex set and vertex $v_i$ always corresponds to row and column $i$.
Everything below still holds, but any mention of row-column swapping may be trivially true.
If a graph undergoes an automorphism, it has been turned into another graph with equivalent structure.
$G_1 = (V_1, E_1)$
$G_2 = (V_2, E_2)$
$f(G_1) = G_2$ is an automorphism (strictly speaking an isomorphism; "automorphism" is the special case $G_1 = G_2$) if and only if:
- there exists a bijection $g : V_1 \to V_2$ such that $(x, y) \in E_1$ if and only if $(g(x), g(y)) \in E_2$.
This is not the same as "$f(G_1) = G_2$ if and only if $V_1 = V_2$ and $e \in E_1 \iff e \in E_2$", which would be true equality. That stricter definition is closer to the one you were holding for adjacency matrices: every element of the vertex and edge sets is "unchanged", so under this notion of equality no swapping at all qualifies as an automorphism. This is not what you intend at all, I'm fairly certain.
If you consider that you're already constructing equivalence classes of graphs (otherwise, the only automorphism on a graph is the identity map), then you can similarly construct an equivalence relation on the adjacency matrices.
You define a new $=$ relation such that $A_1 = A_2$ if and only if $A_2$ can be attained from $A_1$ by a sequence of $(i,j)$ row-column swaps (swap rows $i,j$ and columns $i,j$ simultaneously). Since transpositions generate every permutation, this is the same as allowing an arbitrary simultaneous permutation of the rows and columns; for any finite graph it is verifiable exhaustively by trying every permutation. This equality relation is far more relaxed than cell-by-cell equality, and it gives you exactly what you need to verify automorphisms between the graphs these adjacency matrices represent.
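As an illustration, a brute-force check of this relaxed equality (a sketch in Python; `permutation_equivalent` is a hypothetical helper name, and the exhaustive search is only feasible for small graphs):

```python
import numpy as np
from itertools import permutations

def permutation_equivalent(A1, A2):
    """Check whether A2 can be reached from A1 by simultaneously
    permuting rows and columns, i.e. by some sequence of (i,j)
    row-column swaps. Brute force over all permutations."""
    if A1.shape != A2.shape:
        return False
    n = A1.shape[0]
    for perm in permutations(range(n)):
        p = list(perm)
        # A1[np.ix_(p, p)] applies the permutation to rows and columns at once
        if np.array_equal(A1[np.ix_(p, p)], A2):
            return True
    return False

# Two adjacency matrices of a path on 3 vertices, written with
# different vertex labelings.
A1 = np.array([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]])
A2 = np.array([[0, 0, 1],
               [0, 0, 1],
               [1, 1, 0]])
print(permutation_equivalent(A1, A2))  # True
```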
This equality relation is consistent with any automorphism on your graph. If your automorphism swaps the $i,j$ vertices with each other, this bijectively maps to an automorphism in your adjacency matrices by swapping the $i,j$ row-columns. The reverse also holds following similar logic.
Any automorphism of the graph, viewed as a permutation of the vertices, factors into a chain of single swaps (transpositions), and each swap corresponds to a single row-column swap on the adjacency matrix; conversely, any composition of row-column swaps that fixes the adjacency matrix arises from a vertex permutation in the same way. This correspondence preserves composition in both directions, so there is a unique adjacency-matrix automorphism for each graph automorphism, and vice versa.
We now have a bijective, composition-preserving map from the automorphism group of the graph to the automorphism group of its adjacency matrix. Therefore, the groups are isomorphic.
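A small sketch of the correspondence, assuming numpy (`automorphisms` is an illustrative helper name): it enumerates exactly the vertex permutations whose simultaneous row-column action fixes the adjacency matrix, which for the 4-cycle recovers the dihedral group of order 8:

```python
import numpy as np
from itertools import permutations

def automorphisms(A):
    """All vertex permutations p whose simultaneous row-column
    permutation leaves A unchanged, i.e. A[p[i], p[j]] == A[i, j]."""
    n = A.shape[0]
    return [p for p in permutations(range(n))
            if np.array_equal(A[np.ix_(list(p), list(p))], A)]

# 4-cycle with vertices in cyclic order 0-1-2-3
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])
auts = automorphisms(A)
print(len(auts))  # 8: the dihedral group of the square
```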
Best Answer
You have to realize that graph isomorphism is very easy in the typical case, and hard only in very unusual cases: when we are comparing two graphs that are very similar and also highly symmetric.
In the typical case, if the graphs are not isomorphic, the eigenvalues are already likely to tell us that. (Usually, that's not how graph isomorphism algorithms proceed, though; they try to identify canonical ways to partition vertices into different types, and refine those partitions, in a less algebraic and more combinatorial way.)
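As a sketch of the spectral shortcut (illustrative Python; `spectra_differ` is a hypothetical name): comparing sorted eigenvalue multisets can certify non-isomorphism quickly, though equal spectra prove nothing, since isospectral non-isomorphic graphs exist.

```python
import numpy as np

def spectra_differ(A1, A2, tol=1e-8):
    """Quick negative test: if the sorted eigenvalue multisets differ,
    the graphs cannot be isomorphic. Equal spectra are inconclusive."""
    if A1.shape != A2.shape:
        return True
    s1 = np.sort(np.linalg.eigvalsh(A1))
    s2 = np.sort(np.linalg.eigvalsh(A2))
    return not np.allclose(s1, s2, atol=tol)

# Path P3 versus triangle K3: different spectra, hence not isomorphic.
P3 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
K3 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
print(spectra_differ(P3, K3))  # True
```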
In highly symmetric cases, looking at the eigenvectors is much more difficult, because the eigenvectors can be rewritten in many ways: any eigenvector can be rescaled or negated, and when an eigenvalue is repeated, any basis of its eigenspace is an equally valid set of eigenvectors.
Asking "are these two sets of eigenvectors the same, up to the various symmetries we have?" is in many ways the same problem as asking "are these two adjacency matrices the same, up to a permutation of the vertices?" So we haven't made the problem easier by finding the eigenvectors.
I can imagine that, situationally, eigenvectors might be useful. For example, if you compute the eigenvalues and find one with multiplicity $1$ (but not a useless one, such as the top eigenvalue of a regular graph, whose eigenvector is constant), then you could look at its eigenvector and label each vertex with the corresponding value. The values might not all be different, but this would give us a starting partition to feed into a stronger isomorphism algorithm.
I don't know that these algorithms don't do that already! But it seems like one of many possible heuristics with many branching cases, and a very complicated isomorphism algorithm might want to pull this one out if the circumstances warrant.
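For what it's worth, here is a sketch of that heuristic in Python (`eigenvector_partition` is a hypothetical name; numpy assumed). It picks the largest eigenvalue of multiplicity $1$, fixes the eigenvector's sign ambiguity, and groups vertices by their entries:

```python
import numpy as np

def eigenvector_partition(A, tol=1e-6):
    """Heuristic starting partition from an eigenvalue of multiplicity 1.
    The eigenvector of a simple eigenvalue is determined up to sign,
    so the sign is fixed before grouping vertices by their entries."""
    lam, U = np.linalg.eigh(A)
    for i in range(len(lam) - 1, -1, -1):             # largest eigenvalue first
        if np.sum(np.abs(lam - lam[i]) < tol) == 1:   # multiplicity 1
            v = U[:, i]
            if v[np.argmax(np.abs(v))] < 0:           # resolve the sign ambiguity
                v = -v
            groups = {}
            for vertex, val in enumerate(np.round(v, 6)):
                groups.setdefault(val, []).append(vertex)
            return list(groups.values())
    return None  # every eigenvalue is repeated

# Path on 4 vertices: the partition separates endpoints from inner vertices.
P4 = np.array([[0, 1, 0, 0],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [0, 0, 1, 0]], dtype=float)
print(eigenvector_partition(P4))  # [[0, 3], [1, 2]]
```

Note that for a regular graph this particular choice returns the trivial one-cell partition, since the top eigenvector is constant; a real implementation would skip such eigenvalues.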