Proof explanation that the Riemann hypothesis implies $L(x) = O(x^{\frac12 + \varepsilon})$, where $L(x) = \sum_{n\leq x}\lambda(n)$

analytic-number-theory, number-theory, proof-explanation, riemann-zeta

$\lambda(n)$ is the Liouville function. According to the first proof in the paper "The distribution of weighted sums of the Liouville function and Pólya’s conjecture", the Riemann hypothesis is equivalent to the statement that $L(x) = O(x^{\frac12 + \varepsilon})$ for every $\varepsilon > 0$.

They prove this by noting that
$$\frac{\zeta(2s)}{\zeta(s)} = s \int_1^\infty{\frac{L(x)}{x^s}\frac{dx}x} = \sum_{n=1}^{\infty}{\frac{\lambda(n)}{n^s}}$$
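As a quick numerical sanity check of this identity (not part of the proof), one can evaluate both sides at the real point $s = 2$, where $\zeta(4)/\zeta(2) = (\pi^4/90)/(\pi^2/6) = \pi^2/15$. The sieve below is an illustrative sketch, not taken from the paper:

```python
import math

def liouville_sieve(N):
    """Return lam[0..N] with lam[n] = lambda(n) = (-1)^Omega(n),
    via a smallest-prime-factor sieve."""
    spf = list(range(N + 1))  # spf[n] = smallest prime factor of n
    for p in range(2, int(N**0.5) + 1):
        if spf[p] == p:  # p is prime
            for m in range(p * p, N + 1, p):
                if spf[m] == m:
                    spf[m] = p
    lam = [0] * (N + 1)
    lam[1] = 1
    for n in range(2, N + 1):
        lam[n] = -lam[n // spf[n]]  # removing one prime factor flips the sign
    return lam

N = 100_000
lam = liouville_sieve(N)
partial = sum(lam[n] / n**2 for n in range(1, N + 1))
exact = math.pi**2 / 15  # zeta(4)/zeta(2)
print(partial, exact)  # the partial sum agrees to several decimal places
```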

I understand why this means the statement implies the Riemann hypothesis: if $\zeta$ had a zero with real part $\frac12+\varepsilon$ while $L(x) = O(x^{\frac12 + \frac\varepsilon2})$, the left side would have a pole at that zero, while the right side would remain finite, since the integral converges for $\Re s > \frac12 + \frac\varepsilon2$.

However, why does the opposite direction hold? Why can't $L$ alternate signs a lot, and be large only on very rare occasions, so that even though $\frac{L(x)}{x^s}$ is unbounded, the integral $\int_1^\infty{\frac{L(x)}{x^s}\frac{dx}x}$ still converges?
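For what it's worth, $L(x)$ does not change sign at all for small $x$: it is known to satisfy $L(x) \le 0$ for $2 \le x \le 906150256$ (Pólya's conjecture, which fails just beyond that point), and in this range $|L(x)|$ stays well below $x^{1/2}$ times a small constant. A rough empirical illustration (my own sketch, using naive trial-division factoring):

```python
def omega_count(n):
    """Omega(n): number of prime factors of n counted with multiplicity."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return count

N = 50_000
L, worst, polya_holds = 0, 0.0, True
for n in range(1, N + 1):
    L += (-1) ** omega_count(n)  # lambda(n) = (-1)^Omega(n)
    if n >= 2:
        worst = max(worst, abs(L) / n**0.5)   # ratio |L(x)| / x^{1/2}
        polya_holds = polya_holds and (L <= 0)
print(worst, polya_holds)  # ratio stays O(1) here; L(x) <= 0 throughout
```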

Best Answer

The fundamental reason why RH implies $L(x) = O(x^{\frac12 + \varepsilon})$ (or $M(x) = O(x^{\frac12 + \varepsilon})$, where $M$ is the summatory function of the Möbius function $\mu$) is that under RH one has $1/\zeta(s)=O_{\varepsilon}(|t|^{\varepsilon})$, $s=\sigma+it$, uniformly in $\sigma \ge \sigma_0>1/2$ (and similarly for $\zeta$ itself, of course).

This is a classical result which follows by applying Hadamard's Three Circles Theorem to $\log \zeta$ on carefully selected domains to the right of the line $\Re s = 1/2$; RH is essential in guaranteeing that $\log \zeta$ is analytic there, but otherwise only general (and easy-to-prove) facts about $\zeta$ are needed, such as polynomial bounds on vertical lines and boundedness on $\sigma \ge 1+\delta$.

Then one can apply another classical result of Landau (less well known, and repeatedly reproved via the Perron formula in various textbooks on the zeta function and analytic number theory) which says that if $f$ is analytic on a half-plane $\Re s > a$, is given by a Dirichlet series $f(s)=\sum a_n n^{-s}$ on $\Re s >b \ge a$, and satisfies $|f(s)|=O_{\varepsilon}(|t|^{\varepsilon})$ uniformly in $\sigma \ge a_0>a$ for any $a_0>a$, then $\sum a_n n^{-s}$ converges on $\Re s >a$

(and equals $f(s)$ there, which is trivial by analytic continuation once the convergence of the Dirichlet series is established; the convergence itself is the nontrivial part).

One can think of this as a Tauberian converse of the direct Abelian result that if $\sum a_n n^{-s}$ converges on $\Re s >a$, then $f(s)=\sum a_n n^{-s}$ is analytic there; as usual for Tauberian theorems, one requires an extra condition, namely $|f(s)|=O_{\varepsilon}(|t|^{\varepsilon})$ uniformly in $\sigma \ge a_0>a$ for any $a_0>a$.

Note that the Tauberian condition is essential, since $\eta(s)=\sum (-1)^{n-1}n^{-s}$ is entire but is given by its Dirichlet series on $\Re s >0$ only.
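One can see both halves of this example numerically. At $s = 1$ (inside $\Re s > 0$) the alternating series converges, to $\eta(1) = \log 2$; at $s = 0$ the series is $1 - 1 + 1 - 1 + \cdots$, whose partial sums oscillate forever, even though $\eta(0) = 1/2$ is perfectly finite. A small sketch:

```python
import math

# Partial sums of the Dirichlet series eta(s) = sum (-1)^{n-1} n^{-s}.
# At s = 1 (Re s > 0) the alternating series converges, to log 2:
N = 100_000
at_one = sum((-1) ** (n - 1) / n for n in range(1, N + 1))
print(at_one, math.log(2))  # agree to about 1/(2N)

# At s = 0 the partial sums only take the values 1 and 0, never settling,
# although eta(0) = 1/2 by analytic continuation:
partials_at_zero = {sum((-1) ** (n - 1) for n in range(1, M + 1)) for M in range(1, 11)}
print(partials_at_zero)  # {0, 1}
```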

With the above considerations one gets that RH implies the convergence of $\sum_{n=1}^{\infty}{\frac{\lambda(n)}{n^s}}$ and of $\sum_{n=1}^{\infty}{\frac{\mu(n)}{n^s}}$ for $\Re s >1/2$; then the standard formulas relating the abscissa of convergence of a Dirichlet series to the growth of its summatory function give $L(x), M(x) = O_{\varepsilon}(x^{\frac12 + \varepsilon})$, which is also proven in the answer by the OP here, but that is the easy part. The convergence of the Dirichlet series down to abscissa $1/2$ is the difficult part.
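For completeness, here is a sketch (my own, via the standard Abel summation argument) of that easy direction, from convergence at $s_0 = \frac12 + \varepsilon$ to the growth bound on $L$:

```latex
% Assume the series converges at s_0 = 1/2 + eps, so that
% A(x) := \sum_{n \le x} \lambda(n) n^{-s_0} is bounded: A(x) = O(1).
% Abel summation then gives
\[
L(x) = \sum_{n \le x} \lambda(n)\, n^{-s_0} \cdot n^{s_0}
     = A(x)\,x^{s_0} - s_0 \int_{1}^{x} A(t)\, t^{s_0 - 1}\, dt
     = O\!\left(x^{s_0}\right)
     = O\!\left(x^{\frac12 + \varepsilon}\right),
\]
% since |A(t)| \le C forces the integral to be O(x^{s_0}/s_0).
```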
