You have this backwards.
If $\boldsymbol x$ is an eigenvector of the greatest eigenvalue in one graph, and $\boldsymbol y$ is an eigenvector of the greatest eigenvalue in the other graph, and we normalize them to have $\|\boldsymbol x\| = \|\boldsymbol y\| = 1$, then it's valid to compare their entries. The condition $\|\boldsymbol x\| =1$ (with respect to any norm you like, as long as it's the same for both vectors) ensures that the "total amount of centrality" is fixed, which puts $x_v$ on a fixed scale. Essentially, $x_v$ measures the influence of vertex $v$ as a fraction of the total influence of all vertices.
It's meaningful to compare these: if $x_v > y_v$, then $v$ is more influential in the first graph than in the second.
On the other hand, if $\lambda$ is the eigenvalue of $\boldsymbol x$ in one graph, and $\lambda'$ is the eigenvalue of $\boldsymbol y$ in the other, then comparing $\lambda\boldsymbol x$ to $\lambda' \boldsymbol y$ is unfair. Maybe $\lambda$ is much smaller than $\lambda'$; in that case, the comparison will tell you that even the most central vertex of the first graph is less central than it is in the second.
For example, you might be comparing a webpage's ranking at two different points in time. (For simplicity, let's say that all webpages exist at both times.) Suppose that the internet has gotten more connected overall; then $\lambda$ will increase over time. Comparing $x_v$ to $y_v$ will tell you if a webpage $v$ has gotten more or less influential. Comparing $\lambda \boldsymbol x$ to $\lambda' \boldsymbol y$ might just end up telling you that all webpages have gotten more influential, which would be nonsense.
Or maybe it wouldn't be. Depending on your application, you might want different things from your eigenvalue measure. But in that case, you should start by thinking about what those things are. What properties do you want from your cross-graph comparisons?
Here's an interesting example. Compare the following graphs:
- The complete graph $K_{10}$, with $\lambda=9$ and $\boldsymbol x$ proportional to $(1,1,1,1,1,1,1,1,1,1)$.
- The star graph $K_{1,9}$, with $\lambda=3$ and $\boldsymbol x$ proportional to $(3,1,1,1,1,1,1,1,1,1)$.
Let $v$ be the vertex of degree $9$ in both graphs. If we normalize $\boldsymbol x$ by its $2$-norm and compare, then $v$ is more central in $K_{1,9}$. But if we scale by $\lambda$ first, then $v$ is more central in $K_{10}$. Which of these is reasonable?
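A quick numerical check of this example (a sketch using NumPy; the vertex labeling is my own, with the degree-$9$ vertex $v$ at index $0$ in both graphs):

```python
import numpy as np

# Adjacency matrices of K_10 and K_{1,9}; index 0 is the degree-9 vertex v in both
n = 10
A_complete = np.ones((n, n)) - np.eye(n)
A_star = np.zeros((n, n))
A_star[0, 1:] = A_star[1:, 0] = 1

def top_eig(A):
    """Largest eigenvalue with its unit (2-norm) eigenvector, taken entrywise nonnegative."""
    vals, vecs = np.linalg.eigh(A)          # eigenvalues in ascending order
    return vals[-1], np.abs(vecs[:, -1])

lam, x = top_eig(A_complete)   # lam = 9,  x = (1,...,1)/sqrt(10)
lam2, y = top_eig(A_star)      # lam2 = 3, y = (3,1,...,1)/sqrt(18)

print(f"x_v = {x[0]:.3f}, y_v = {y[0]:.3f}")                     # 0.316 < 0.707
print(f"lam*x_v = {lam*x[0]:.3f}, lam'*y_v = {lam2*y[0]:.3f}")   # 2.846 > 2.121
```

So the two conventions really do disagree about $v$: normalized comparison favors the star's hub, while scaling by $\lambda$ favors the complete graph.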
Best Answer
For intuition, we want to formulate eigenvector-finding as an optimization problem.
Let $A$ be any symmetric matrix. If we minimize $\frac{\mathbf x^{\mathsf T}A \mathbf x}{\mathbf x^{\mathsf T} \mathbf x}$ over all nonzero $\mathbf x$ (or, equivalently, minimize $\mathbf x^{\mathsf T}A \mathbf x$ over all $\mathbf x$ with $\|\mathbf x\|=1$), then the minimum value is the smallest eigenvalue of $A$, attained when $\mathbf x$ is a corresponding eigenvector.
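As a sanity check (a sketch; the symmetric matrix here is an arbitrary random example of my own, not anything from the question), the Rayleigh quotient at the bottom eigenvector equals the smallest eigenvalue, and random unit vectors never do better:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2                       # an arbitrary symmetric matrix

vals, vecs = np.linalg.eigh(A)          # eigenvalues in ascending order
lam_min, v = vals[0], vecs[:, 0]

# Rayleigh quotient at the bottom eigenvector equals the smallest eigenvalue
print((v @ A @ v) / (v @ v))

# No unit vector beats it: sample many random unit vectors
samples = rng.standard_normal((1000, 5))
samples /= np.linalg.norm(samples, axis=1, keepdims=True)
print(min(u @ A @ u for u in samples) >= lam_min)   # True
```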
This works for the Laplacian matrix $L$, but the minimum isn't very interesting: the smallest eigenvalue is always $0$, with $(1,1,\dots,1)$ as an eigenvector. We can look at the second-smallest eigenvalue instead, by adding an extra constraint on $\mathbf x$: we require $x_1 + x_2 + \dots + x_n = 0$, so that $\mathbf x$ is perpendicular to the eigenvector of the smallest eigenvalue.
So now we have a description of the Fiedler vector of the graph: it is the vector $\mathbf x$ that minimizes $\mathbf x^{\mathsf T}L\mathbf x$ subject to $\|\mathbf x\|=1$ and $x_1 + x_2 + \dots + x_n = 0$. To make this more helpful, note that $\mathbf x^{\mathsf T}L\mathbf x$ can be written as $\sum_{ij \in E} (x_i - x_j)^2$.
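To see the identity $\mathbf x^{\mathsf T}L\mathbf x = \sum_{ij \in E} (x_i - x_j)^2$ concretely, here is a sketch on a small assumed example graph (the path $0$–$1$–$2$–$3$) with an arbitrary test vector:

```python
import numpy as np

# Laplacian of a small example graph: the path 0-1-2-3
edges = [(0, 1), (1, 2), (2, 3)]
n = 4
L = np.zeros((n, n))
for i, j in edges:
    L[i, i] += 1; L[j, j] += 1       # degrees on the diagonal
    L[i, j] -= 1; L[j, i] -= 1       # minus the adjacency matrix

x = np.array([2.0, -1.0, 0.5, 3.0])  # an arbitrary test vector
quad = x @ L @ x
edge_sum = sum((x[i] - x[j]) ** 2 for i, j in edges)
print(quad, edge_sum)                # the two agree: 17.5 17.5
```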
The conditions $\|\mathbf x\|=1$ and $x_1 + x_2 + \dots + x_n = 0$ tell us that the Fiedler vector has to have "enough" positive and negative components; they can't all be the same. Since we're minimizing $\sum_{ij \in E} (x_i - x_j)^2$, we want to make the components on any two adjacent vertices as close together as possible.
So the Fiedler vector ends up painting the graph in a gradient that goes from positive to negative. Each individual value $x_i$ doesn't mean much by itself. But the relative values do: clusters of vertices that are close together get similar values, and far-apart vertices often get different values.
For the next eigenvector, we will add an additional constraint to our problem: we'll be looking for a vector $\mathbf y$ perpendicular to the Fiedler vector. Essentially, this says that $\mathbf y$ should have similar properties, but be different from the thing we found just now, describing a different feature of the graph.
For example, if our graph has three big, sparsely connected clusters, the Fiedler vector might assign positive values to one cluster and negative values to the other two. The next eigenvector might choose a different cluster to separate from the other two. That distinguishes all three clusters, so the eigenvector after that will have to find some separation within the clusters themselves...
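Here's a small numerical illustration (a sketch; the graph is my own example: three triangle clusters chained together by single bridge edges, so the exact sign pattern differs from the symmetric three-cluster story above, and the overall sign of each eigenvector is arbitrary):

```python
import numpy as np

# Three triangle clusters A={0,1,2}, B={3,4,5}, C={6,7,8},
# chained A-B-C by the bridge edges (2,3) and (5,6)
edges = [(0, 1), (1, 2), (0, 2),
         (3, 4), (4, 5), (3, 5),
         (6, 7), (7, 8), (6, 8),
         (2, 3), (5, 6)]
n = 9
L = np.zeros((n, n))
for i, j in edges:
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1

vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
fiedler = vecs[:, 1]             # eigenvector of the second-smallest eigenvalue
third = vecs[:, 2]               # the next eigenvector, orthogonal to the Fiedler vector

# In this chained layout, the Fiedler vector puts cluster A at one extreme,
# C at the other, and B near zero; the next eigenvector cuts differently.
print(np.round(fiedler, 2))
print(np.round(third, 2))
```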