Number Theory – Iterated Logarithms in Analytic Number Theory

Tags: analytic-number-theory, nt.number-theory

As all analytic number theorists know, iterated logarithms ($\log x$, $\log \log x$, $\log \log \log x$, etc.) are prevalent in analytic number theory. One can give countless examples of this phenomenon. My question is, can someone give an intuitive account for why this is so? Specifics regarding any of the famous theorems involving iterated logarithms are welcome. Many thanks!

EDIT: Thank you so much for the answers so far! I'm still trying to get a better intuition for how a $\log \log \log$ or $\log \log \log \log$ arises, especially in Littlewood's 1914 proof that $\pi(x)-\operatorname{li}(x) = \Omega_{\pm}\!\left(\frac{\sqrt{x}\,\log \log \log x}{\log x}\right)$ as $x \to \infty$, or in Montgomery's conjecture that $\limsup_{x \to \infty}\dfrac{\lvert\pi(x)-\operatorname{li}(x)\rvert\,\log x}{\sqrt{x}\,(\log \log \log x)^2}$ is finite and positive. I admit to knowing nothing (yet) about sieve theory, so I will have to dive into the proof of the large prime gap theorem by Tao, Maynard, et al. Can someone give a more precise account of how the $\log \log \log$ or $\log \log \log \log$ arises in the proof? I'm very familiar with why occurrences of $\log \log$ happen, but once you get to $\log \log \log$, I'm still a bit mystified. Also, is there a good introduction to sieve theory where I could start, or should I just dive right into the papers on large prime gaps?

FURTHER EDIT: Can someone also explain intuitively the reason for the $\log \log \log$ in Littlewood's theorem? Historically, was this the first occurrence of a triple log in number theory?

Best Answer

There are two main sources of repeated logs. (These can be further refined into natural subcategories, but I'll only mention a couple of those.) The two main sources are:

Type 1: Repeated logs occur because that is just the truth of the matter.

One of my favorite examples is a 2008 theorem of Kevin Ford, solving the multiplication table problem. The theorem states that $$ |\{a\cdot b\, : a,b\in \{1,2,\ldots,N\}\}|\asymp \frac{N^2}{(\log N)^c(\log\log N)^{3/2}}, $$ where $c=1-\frac{1+\log\log 2}{\log 2}\approx 0.086$.

Lest you believe that the $(\log\log N)^{3/2}$ factor is a consequence of this being a 2-dimensional problem, note that similar factors show up in higher dimensions as well. See this other question for more information.
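As a sanity check, here is a minimal brute-force computation (my own illustration, not Ford's method) comparing the exact count of distinct products with the shape of the asymptotic. Since $\asymp$ only asserts matching order of growth, the printed ratio is not expected to converge quickly, only to stay bounded:

```python
import math

# Ford's exponent c = 1 - (1 + log log 2)/log 2 ~ 0.086
c = 1 - (1 + math.log(math.log(2))) / math.log(2)

for N in (100, 500, 2000):
    # exact count of distinct products in the N x N multiplication table
    products = {a * b for a in range(1, N + 1) for b in range(a, N + 1)}
    predicted = N**2 / (math.log(N) ** c * math.log(math.log(N)) ** 1.5)
    print(N, len(products), round(len(products) / predicted, 3))
```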

In some cases it is much easier to see where these extra logs come from. For instance, turning a sum over integers into a sum over primes typically introduces an extra logarithm, simply because the primes up to $N$ have density about $1/\log N$.

For instance, we have $$ \sum_{n=1}^{N}\frac{1}{n}=\log(N)+\gamma+o(1)\ \text{ while }\ \sum_{p\leq N,\ p\text{ prime}}\frac{1}{p}=\log\log(N)+B+o(1), $$ where $\gamma$ is the Euler–Mascheroni constant and $B$ is Mertens' constant. These two asymptotics can be thought of as discrete versions of the integral identities $$ \int\frac{1}{x}\, dx = \log(x) \ \text{ while }\ \int\frac{1}{x\log(x)}\, dx=\log\log(x). $$
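These two asymptotics are easy to verify numerically. Here is a short sketch (the helper `primes_up_to` is my own, not from any paper); the two printed differences should be close to $\gamma \approx 0.5772$ and $B \approx 0.2615$:

```python
import math

def primes_up_to(n):
    # simple sieve of Eratosthenes
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, s in enumerate(sieve) if s]

N = 10**6
harmonic = sum(1 / n for n in range(1, N + 1))
prime_sum = sum(1 / p for p in primes_up_to(N))
print(harmonic - math.log(N))             # ~ 0.5772 (Euler-Mascheroni gamma)
print(prime_sum - math.log(math.log(N)))  # ~ 0.2615 (Mertens' constant B)
```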

Since primes occur all over number theory, and each pass from integers to primes picks up an extra log in this way, double-log factors appear frequently.

Type 2: Repeated logs occur as an artifact of our current best machinery.

For example, Rankin showed in 1938 that the largest prime gap below $N$, for $N\gg 0$, is at least $$ \frac{1}{3}\frac{\log(N)\log\log(N)\log\log\log\log(N)}{(\log\log\log(N))^2}. $$ These extra logs arise when optimizing inequalities using the machinery of the day; they do not represent a fundamental truth about the problem. The constant $\frac{1}{3}$ was slowly improved. Recently, in 2014, Ford, Green, Konyagin, and Tao improved this bound, and Maynard also did so independently, replacing the fraction $\frac{1}{3}$ by an arbitrarily large constant. See this preprint and this other preprint for more details. Later, these five mathematicians together removed the square from the denominator.
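To get a feel for how glacially the quadruple log grows, one can evaluate Rankin's bound directly as a function of $\log N$ (the helper `rankin_bound` below is my own illustration, not from the papers; we work with $\log N$ since $N$ itself is astronomically large):

```python
import math

def rankin_bound(log_N):
    ll = math.log(log_N)  # log log N
    lll = math.log(ll)    # log log log N
    llll = math.log(lll)  # log log log log N
    return log_N * ll * llll / (3 * lll**2)

for k in (10**2, 10**6, 10**12):  # i.e. N = e^k
    print(f"log N = {k:.0e}: bound ~ {rankin_bound(k):.4g}, "
          f"bound / log N ~ {rankin_bound(k) / k:.3f}")
```

Running this suggests that, with the constant $\frac{1}{3}$, the bound only overtakes the trivial average gap $\log N$ once $\log N$ is around $10^{12}$, which is one reason improving the constant mattered.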

If you read the proofs, the logs come from the current state-of-the-art sieve methods, together with the bounding techniques. When you solve for the best-fit parameters, undoing the exponentiation that occurs in the calculations, the logs just fall out.
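To give a flavor of the mechanism (a standard heuristic, not a step quoted from those papers): sieve and smooth-number estimates frequently produce a parameter $u$ constrained by an equation like $u^u \approx y$, that is, $u\log u = \log y$. Solving asymptotically gives $$ u \sim \frac{\log y}{\log\log y}, \qquad \text{so} \qquad \log u = \log\log y - \log\log\log y + o(1). $$ Each time such a solution is substituted into another exponential and the result is inverted again, one more layer of logarithm appears. In particular, if $y$ is itself already of size $\log N$, then $\log u$ involves $\log\log\log N$, which is how triple and quadruple logs typically enter.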

In these types of problems it happens quite regularly that one new idea applied to the problem changes the asymptotic (sometimes introducing more multi-log factors, reflecting the small additional room for improvement that was gained). What is surprising about Rankin's bound is that even though it is far from the predicted asymptotic, each new idea only improved the constant out front, at least until recently.


Edited to add: Working through a well-written proof will, of course, give a deeper understanding of where iterated logs arise in the problem at hand. That is certainly true for the three examples above.

However, if you are not yet familiar with sieve theory, or the circle method, I wouldn't recommend working through the big proofs of those theorems mentioned above and in your question (at least, not initially). Rather, I would recommend starting with an introductory text on sieves, such as Cojocaru and Murty's book "An Introduction to Sieve Methods and their Applications". Double-logs occur almost at the very beginning. Triple-logs show up in the exercises in Chapter 5 (and perhaps earlier). Indeed, problem 25 is a typical example of how a triple log is introduced to improve an asymptotic.