$\lambda_2 \leq \kappa(G)$; why

graph-theory, spectral-graph-theory

Let $\kappa(G)$ be the vertex connectivity of a graph $G$, and let
$\lambda_2$ be the second-smallest eigenvalue of its Laplacian matrix.
Then we have

$$ \lambda_{2} \leq \kappa(G). $$

This theorem can be found, for example, as Lemma 4 here. A proof is given, but I don't see exactly why it holds.

Proof: Let $S$ be a set of vertices of size $\kappa(G)$ that
disconnects $G$. Then

$$ \lambda_{2}(G-S) = 0 {\color{blue}\implies} \lambda_{2}(G) \leq 0+\kappa(G). $$

Why does this relation hold? I would like more elaboration on the step marked ${\color{blue}\implies}$.

Best Answer

In general, the idea with upper bounds on Laplacian eigenvalues is to use the following characterization: if $\lambda_1(L) \le \dots \le \lambda_n(L)$ are the eigenvalues of a symmetric matrix $L$, then $$ \lambda_1(L) = \inf\left\{\frac{\mathbf x^{\mathsf T}L\mathbf x}{\mathbf x^{\mathsf T} \mathbf x}: \mathbf x \ne \mathbf 0\right\} $$ and if $\mathbf u^{(1)}$ is the eigenvector corresponding to $\lambda_1(L)$, then $$ \lambda_2(L) = \inf\left\{\frac{\mathbf x^{\mathsf T}L\mathbf x}{\mathbf x^{\mathsf T} \mathbf x}: \mathbf x \ne \mathbf 0, \mathbf x^{\mathsf T}\mathbf u^{(1)} = 0\right\}. $$ The minimum in both cases is achieved by taking $\mathbf x$ to be the corresponding eigenvector.

If we pick a specific value of $\mathbf x$ (satisfying $\mathbf x \ne \mathbf 0, \mathbf x^{\mathsf T}\mathbf u^{(1)} = 0$) that we think is going to be a decent approximation of the eigenvector to $\lambda_2(L)$, and compute $\frac{\mathbf x^{\mathsf T}L\mathbf x}{\mathbf x^{\mathsf T} \mathbf x}$, that gives us an upper bound on the actual value of $\lambda_2(L)$.
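This test-vector idea is easy to check numerically. Below is a minimal sketch using the 4-cycle $C_4$ as an assumed example graph: the Rayleigh quotient of the second eigenvector recovers $\lambda_2$ exactly, while any other vector orthogonal to $\mathbf u^{(1)}$ only over-estimates it.

```python
import numpy as np

# Laplacian of the 4-cycle C4 (an assumed example graph).
L = np.array([[ 2, -1,  0, -1],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [-1,  0, -1,  2]], dtype=float)

vals, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
lam2, u1 = vals[1], vecs[:, 0]
rq = lambda x: x @ L @ x / (x @ x)      # Rayleigh quotient

# The second eigenvector achieves the infimum exactly.
print(np.isclose(rq(vecs[:, 1]), lam2))  # True

# Any other feasible test vector only gives an upper bound on lambda_2.
rng = np.random.default_rng(0)
x = rng.standard_normal(4)
x -= (x @ u1) / (u1 @ u1) * u1          # project out the first eigenvector
print(rq(x) >= lam2 - 1e-12)            # True: test vectors over-estimate
```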

In the case where $L$ is the Laplacian matrix of a graph, $\lambda_1 = 0$ and the corresponding $\mathbf u^{(1)}$ is the all-$1$ vector, which makes the optimization problem for $\lambda_2$ easier to state. Being orthogonal to $\mathbf u^{(1)}$ just means all entries of the eigenvector have to add up to $0$.
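Both facts are immediate to verify; here is a small sketch on the path graph $P_3$ (an assumed example): the rows of a Laplacian sum to zero, so the all-ones vector is a $0$-eigenvector, and orthogonality to it is exactly the entries-sum-to-zero condition.

```python
import numpy as np

# Laplacian of the path graph P3: row sums are zero,
# so the all-ones vector is an eigenvector with eigenvalue 0.
L = np.array([[ 1, -1,  0],
              [-1,  2, -1],
              [ 0, -1,  1]], dtype=float)

ones = np.ones(3)
print(np.allclose(L @ ones, 0))           # True, i.e. lambda_1 = 0

# Orthogonality to all-ones is the same as entries summing to 0:
vals, vecs = np.linalg.eigh(L)
print(np.isclose(vecs[:, 1].sum(), 0.0))  # True for the second eigenvector
```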


The bound that $\lambda_2(G) \le \lambda_2(G-S) + |S|$ is obtained in exactly this way. Let $G$ be an $n$-vertex graph, and let $\mathbf x \in \mathbb R^{n-|S|}$ be the second eigenvector of the Laplacian matrix of $G-S$. Then the vector $\mathbf y \in \mathbb R^n$ obtained by padding $\mathbf x$ with $|S|$ zeroes is a good guess at the eigenvector of $\lambda_2$ for the Laplacian matrix of $G$. (And it satisfies the orthogonality condition because the sum of $\mathbf y$'s entries is the same as the sum of $\mathbf x$'s entries, which is $0$.)
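Concretely, the padding step looks like this. As an assumed example, take $G$ to be the path $0\text{–}1\text{–}2\text{–}3\text{–}4$ with cut $S=\{2\}$, so $G-S$ is two disjoint edges; the vector $\mathbf x$ below is chosen by hand as a $0$-eigenvector of $L_{G-S}$ orthogonal to the all-ones vector.

```python
import numpy as np

# G - S for the path 0-1-2-3-4 with S = {2}: two disjoint edges {0,1}, {3,4}.
L_GS = np.array([[ 1, -1,  0,  0],
                 [-1,  1,  0,  0],
                 [ 0,  0,  1, -1],
                 [ 0,  0, -1,  1]], dtype=float)

# A second eigenvector of L_{G-S}: it lies in the (2-dimensional) kernel
# and is orthogonal to the all-ones vector.
x = np.array([0.5, 0.5, -0.5, -0.5])
print(np.allclose(L_GS @ x, 0))     # True: eigenvalue lambda_2(G-S) = 0

# Pad x with one zero at the position of the removed vertex 2.
y = np.insert(x, 2, 0.0)
print(np.isclose(y.sum(), 0.0))     # True: y is orthogonal to all-ones in R^5
print(np.isclose(y @ y, x @ x))     # True: padding does not change the norm
```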

Knowing that $$ \frac{\mathbf x^{\mathsf T}L_{G-S}\mathbf x}{\mathbf x^{\mathsf T}\mathbf x} = \lambda_2(G-S) $$ we first want to estimate $$ \frac{\mathbf y^{\mathsf T}L_{G}\mathbf y}{\mathbf y^{\mathsf T}\mathbf y}. $$ In the denominator, $\mathbf y^{\mathsf T}\mathbf y = \mathbf x^{\mathsf T}\mathbf x$ because $\mathbf y$ is just $\mathbf x$ with some zero entries. In the numerator, things are a bit more complicated.

The zero entries of $\mathbf y$ mean that the extra rows and columns of $L_G$ that weren't in $L_{G-S}$ (corresponding to vertices in $S$) drop out, and $L_G$ mostly agrees with $L_{G-S}$ on the rows and columns they do share. The exception is the diagonal: the diagonal entries of $L_{G-S}$ measure degree in $G-S$, while the diagonal entries of $L_G$ measure degree in $G$. So when we go from $\mathbf x^{\mathsf T}L_{G-S}\mathbf x$ to $\mathbf y^{\mathsf T}L_{G}\mathbf y$, we gain an extra $x_i^2$ (or $y_i^2$) for every edge that vertex $i$ has into $S$: each such edge increases the degree of vertex $i$ in $G$ beyond what it had in $G-S$.
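This accounting can be checked directly: the two quadratic forms differ by exactly $\sum_i d_S(i)\,x_i^2$, where $d_S(i)$ counts the edges from vertex $i$ into $S$. A sketch on the same assumed example (path $0\text{–}1\text{–}2\text{–}3\text{–}4$, $S=\{2\}$):

```python
import numpy as np

# Path G = 0-1-2-3-4 with S = {2}; compare the two quadratic forms directly.
L_G = np.array([[ 1, -1,  0,  0,  0],
                [-1,  2, -1,  0,  0],
                [ 0, -1,  2, -1,  0],
                [ 0,  0, -1,  2, -1],
                [ 0,  0,  0, -1,  1]], dtype=float)
L_GS = np.array([[ 1, -1,  0,  0],
                 [-1,  1,  0,  0],
                 [ 0,  0,  1, -1],
                 [ 0,  0, -1,  1]], dtype=float)

x = np.array([0.5, 0.5, -0.5, -0.5])   # 0-eigenvector of L_{G-S}, sums to 0
y = np.insert(x, 2, 0.0)               # padded with a zero at vertex 2

# Vertices 1 and 3 each have one edge into S, so the difference of the
# quadratic forms should be 1*x_1^2 + 1*x_3^2 = 0.25 + 0.25 = 0.5.
d_S = np.array([0.0, 1.0, 1.0, 0.0])   # edges into S, per vertex of G - S
print(np.isclose(y @ L_G @ y, x @ L_GS @ x + d_S @ (x * x)))  # True
```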

But each vertex $i$ of $G-S$ has at most $|S|$ such edges, so the extra contribution is at most $|S|x_i^2$ for vertex $i$, and at most $|S|\mathbf x^{\mathsf T}\mathbf x$ summed over all vertices of $G-S$. Therefore $$ \frac{\mathbf y^{\mathsf T}L_{G}\mathbf y}{\mathbf y^{\mathsf T}\mathbf y} \le \frac{\mathbf x^{\mathsf T}L_{G-S}\mathbf x + |S|\mathbf x^{\mathsf T}\mathbf x}{\mathbf x^{\mathsf T}\mathbf x} = \frac{\mathbf x^{\mathsf T}L_{G-S}\mathbf x}{\mathbf x^{\mathsf T}\mathbf x} + |S|. $$ Since $\frac{\mathbf y^{\mathsf T}L_{G}\mathbf y}{\mathbf y^{\mathsf T}\mathbf y}$ is an upper bound on $\lambda_2(G)$, and $\frac{\mathbf x^{\mathsf T}L_{G-S}\mathbf x}{\mathbf x^{\mathsf T}\mathbf x}$ is exactly $\lambda_2(G-S)$, we get $$ \lambda_2(G) \le \lambda_2(G-S) + |S|. $$ In particular, if $S$ is a vertex cut for $G$ of size $\kappa(G)$, then $\lambda_2(G-S)=0$, so $\lambda_2(G) \le |S| = \kappa(G)$.
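The whole chain of inequalities can be verified end to end. A minimal sketch, again assuming the path $0\text{–}1\text{–}2\text{–}3\text{–}4$ with minimum vertex cut $S=\{2\}$ (so $\kappa(G)=1$); the `laplacian` helper is just an illustrative edge-list constructor:

```python
import numpy as np

def laplacian(n, edges):
    """Graph Laplacian from an edge list (simple undirected graph assumed)."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

# Path 0-1-2-3-4: S = {2} is a minimum vertex cut, so kappa(G) = 1.
L_G = laplacian(5, [(0, 1), (1, 2), (2, 3), (3, 4)])
L_GS = laplacian(4, [(0, 1), (2, 3)])    # G - S, with vertices relabelled

lam2_G = np.linalg.eigvalsh(L_G)[1]      # eigenvalues in ascending order
lam2_GS = np.linalg.eigvalsh(L_GS)[1]

print(np.isclose(lam2_GS, 0.0))          # True: G - S is disconnected
print(lam2_G <= lam2_GS + 1 + 1e-12)     # True: lambda_2(G) <= lambda_2(G-S)+|S|
print(lam2_G <= 1 + 1e-12)               # True: lambda_2(G) <= kappa(G)
```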