Probability that there are two “small” vertices connected by an edge in the random graph

asymptotics, graph-theory, probability, random-graphs

Consider the binomial Erdős–Rényi random graph $G(n,p)$ with $p=\frac{\ln n}{n}$. Call a vertex of $G(n,p)$ small if its degree is less than $\frac{\ln n}{100}$. I need to show that a.a.s. there is no edge of $G(n,p)$ joining two small vertices.

I used the Chernoff bound to show that the probability that a given vertex is small is $$\sim P\left( \mathrm{Bin}\left(n,\frac{\ln n}{n}\right)\leq \frac{\ln n}{100} \right)\leq e^{-\frac{(0.99\ln n)^2}{2\ln n}}=e^{-c\ln n}=n^{-c},$$ where $c=\frac{0.99^2}{2}=\frac{1}{2}-\varepsilon$ for a small $\varepsilon>0$.
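Here I applied the bound $$P\left(\mathrm{Bin}(n,p)\leq(1-\delta)np\right)\leq e^{-\delta^2 np/2}$$ with $np=\ln n$ and $(1-\delta)np=\frac{\ln n}{100}$, i.e. $\delta=\frac{99}{100}$.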

Therefore, for a given pair of vertices $v_1,v_2$, the probability that they are both small and joined by an edge is $$O\left( \frac{\ln n}{n}\cdot n^{-2c} \right)=O\left( \frac{\ln n}{n^{2-2\varepsilon}}\right).$$ Unfortunately this is not $o(n^{-2})$, so I cannot use the union bound over pairs to conclude that the probability of having such a pair $v_1, v_2$ is $o(1)$.
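Explicitly, summing this bound over all $\binom{n}{2}$ pairs would give only $$\binom{n}{2}\cdot O\left( \frac{\ln n}{n^{2-2\varepsilon}}\right)=O\left( n^{2\varepsilon}\ln n \right),$$ which does not tend to $0$.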

My suspicion is that my upper bound on the probability of being small is not tight enough, or alternatively that something more sophisticated than a union bound over the pairs $v_1, v_2$ is needed. I would appreciate a direction or a hint for bridging my $n^{2\varepsilon}$ gap.

Best Answer

The specific Chernoff bound you're using is just not as good in this case. This is no coincidence: in a bound like $\Pr[X < (1-\delta)\mu] < e^{-\delta^2\mu/2}$, the constant $\frac12$ in the exponent is the best we can put there as $\delta \to 0$, and the $\delta^2$ form is designed for that small-$\delta$ regime, where the true exponent really is of order $\delta^2$, much smaller than $\delta$. But here $\delta$ is quite far from $0$, and settling for an exponent of order $\delta^2$ rather than roughly $\delta$ costs exactly the constant factor you cannot afford.

You can use the following usually-horrible form of the Chernoff bound (as seen on Wikipedia, for instance): $$ \Pr[X < (1-\delta)\mu] < \left(\frac{e^{-\delta}}{(1-\delta)^{1-\delta}}\right)^\mu. $$ Here $\delta = \frac{99}{100}$ is a fixed constant and $\mu = \ln n$, so it's enough to check that $\frac{e^{-\delta}}{(1-\delta)^{1-\delta}} < e^{-1/2}$: then the bound we get beats $e^{-\frac12 \ln n} = n^{-1/2}$ by a constant in the exponent. In fact, numerically we can check that for $\delta = \frac{99}{100}$, $\frac{e^{-\delta}}{(1-\delta)^{1-\delta}} < e^{-0.94}$, so the bound we get is at most $n^{-0.94}$.
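To see where the $0.94$ comes from, take logarithms: $$\ln\frac{e^{-\delta}}{(1-\delta)^{1-\delta}} = -\delta-(1-\delta)\ln(1-\delta) = -\frac{99}{100}+\frac{1}{100}\ln 100 \approx -0.944 < -0.94.$$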

From there, proceed as you have.
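For concreteness, here is a sketch of how the finish then looks in your notation: with the per-vertex bound $n^{-0.94}$, a fixed pair $v_1,v_2$ is both small and joined by an edge with probability $$O\left(\frac{\ln n}{n}\cdot n^{-2\cdot 0.94}\right)=O\left(\ln n\cdot n^{-2.88}\right),$$ and the union bound over the at most $n^2$ pairs gives $O\left(\ln n\cdot n^{-0.88}\right)=o(1)$, as required.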