[Math] How does one use the Poisson summation formula

analytic-number-theory, fourier-analysis

While reading the answer to another MathOverflow question, which mentioned the Poisson summation formula, I felt a question of my own coming on. This is something I've wanted to know for a long time. In fact, I've even asked people, who have probably given me perfectly good answers, but somehow their answers have never stuck in my brain. The question is simple: the Poisson summation formula is incredibly useful to many people, but why is that? When you first see it, it looks like a piece of magic, but then suddenly you start spotting that people keep saying "By Poisson summation" and expecting you to fill in the details. In that respect, it's a bit like the phrase "By compactness," but the important difference for me is that I can fill in the details of compactness arguments.

What I would like to know is this. What is the "trigger" that makes people think, "Ah, Poisson summation should be useful here"? And is there some very simple example of how it is applied, with the property that once you understand that example, you basically understand how to apply it in general? (Perhaps two or three examples are needed — that would obviously be OK too.) And can one give a general description of the circumstances where it is useful? (Anyone familiar with the Tricki will see that I am basically asking for a Tricki article on the formula. But I don't mind something incomplete or less polished.)

For reference, here is a related (but different) question about the Poisson summation formula: Truth of the Poisson summation formula

Best Answer

The existing answers list some important situations where Poisson summation plays a role, the application to proving the functional equation of $\theta$, and hence of $\zeta$, being my personal favourite. My best answer to Tim's question as he actually asked it might be this: why not have it in mind to try whenever you have a discrete sum that you are having trouble estimating, especially if you fancy your chances of understanding the Fourier transform of the summands? You'll end up with a different sum that might be a lot easier to understand, and you might even be able to approximate your first sum by an integral (the term $\hat{f}(0)$ in the Poisson summation formula).
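To make the formula itself concrete, here is a quick numerical sanity check (my own toy illustration, not part of the answer above) for the Gaussian $f(x) = e^{-\pi t x^2}$, whose Fourier transform is $\hat{f}(k) = t^{-1/2} e^{-\pi k^2/t}$; Poisson summation $\sum_n f(n) = \sum_k \hat{f}(k)$ then reads exactly as the functional equation $\theta(t) = t^{-1/2}\theta(1/t)$ mentioned above:

```python
import math

def theta_direct(t, terms=50):
    """theta(t) = sum over all integers n of exp(-pi * t * n^2)."""
    return sum(math.exp(-math.pi * t * n * n) for n in range(-terms, terms + 1))

def theta_poisson(t, terms=50):
    """The same sum after Poisson summation: t^{-1/2} * sum_k exp(-pi * k^2 / t)."""
    return sum(
        math.exp(-math.pi * k * k / t) for k in range(-terms, terms + 1)
    ) / math.sqrt(t)

for t in (0.25, 1.0, 3.0):
    lhs, rhs = theta_direct(t), theta_poisson(t)
    print(t, lhs, rhs)  # both evaluations of theta(t) agree up to rounding
    assert abs(lhs - rhs) < 1e-12
```

The $k = 0$ term on the right-hand side is $t^{-1/2} = \int_{-\infty}^{\infty} f(x)\,dx$, which is exactly the "approximate your discrete sum by an integral" main term; the remaining terms measure how far the sum is from that integral.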

To explain a little more with an example, there's a whole theory concerned with the estimation of exponential sums $\sum_{n \leq N} e^{2\pi i \phi(n)}$. There are two processes, called A and B, that can be used to turn a sum like this into something you might be better positioned to understand. Process A is basically Weyl/van der Corput differencing (Cauchy-Schwarz) and Process B is essentially Poisson summation. It's not a very straightforward task to put together a theory of how these processes interact, and how they may best be combined to estimate your sum; in fact this is in general something of an art. Montgomery's Ten Lectures on the Interface Between Analytic Number Theory and Harmonic Analysis contains a nice exposition, and there's a whole LMS lecture note volume by Graham and Kolesnik (Van der Corput's Method of Exponential Sums) if you want more details.
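As a toy illustration of Process A (again my own sketch, not from the sources cited): for a quadratic phase $\phi(n) = \alpha n^2$, expanding $|S|^2$ is an exact identity that replaces the sum by differenced sums whose phases are linear in $n$, and it is this linearization that makes the pieces easier to estimate. A small numerical check, with an arbitrarily chosen $\alpha$:

```python
import cmath

def e(x):
    """e(x) = exp(2*pi*i*x), the standard additive character."""
    return cmath.exp(2j * cmath.pi * x)

alpha = 0.123  # arbitrary phase coefficient, chosen just for this demo
N = 200

# The exponential sum S = sum_{0 <= n < N} e(alpha * n^2).
S = sum(e(alpha * n * n) for n in range(N))

# Process A (Weyl differencing): |S|^2 = sum_{m,n} e(alpha*(m^2 - n^2)),
# and substituting m = n + h gives an exact identity in which each inner
# sum over n has phase alpha*(2*n*h + h^2), linear in n.
S2 = sum(
    e(alpha * (2 * n * h + h * h))
    for h in range(-(N - 1), N)
    for n in range(max(0, -h), min(N, N - h))
)

print(abs(S) ** 2, abs(S2))  # the two quantities agree up to rounding error
```

For a general phase $\phi$ one first applies Cauchy-Schwarz, so the identity becomes an inequality, but the shape is the same: shifted differences $\phi(n+h) - \phi(n)$ with one degree less in $n$.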

I want to share a perhaps slightly obscure paper of Roberts and Sargos (Three-dimensional exponential sums with monomials, Journal für die reine und angewandte Mathematik (Crelle) 591), in which they use Poisson summation, in the form of Process B mentioned above, arbitrarily many times to establish the following rather simple-to-state result: the number of quadruples $x_1,x_2,x_3,x_4$ in $[X, 2X)$ with

$$|1/x_1 + 1/x_2 - 1/x_3 - 1/x_4| \leq 1/X^3$$

is $X^{2 + o(1)}$. In other words, the quantities $1/x_1 + 1/x_2$ tend to avoid one another to pretty much the same extent as random numbers of the same size. Very roughly speaking (I don't really understand the argument in depth), the proof involves looking at exponential sums $\sum_x e^{2\pi i m/x}$, and it is these that are transformed repeatedly using Poisson summation followed by other modifications (it being reasonably pointless to apply Poisson summation twice in succession).
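For what it's worth, the statement itself can be sanity-checked by brute force for tiny $X$ (my own toy script, nothing to do with the paper's method). The "diagonal" quadruples with $\{x_1,x_2\} = \{x_3,x_4\}$ already contribute about $2X^2$ solutions, and the theorem says the off-diagonal ones contribute only $X^{o(1)}$ times more:

```python
import itertools

def near_equal_reciprocal_quadruples(X):
    """Count quadruples x1,x2,x3,x4 in [X, 2X) with
    |1/x1 + 1/x2 - 1/x3 - 1/x4| <= 1/X^3 (brute force, tiny X only)."""
    xs = range(X, 2 * X)
    threshold = 1.0 / X**3
    count = 0
    for x1, x2, x3, x4 in itertools.product(xs, repeat=4):
        if abs(1 / x1 + 1 / x2 - 1 / x3 - 1 / x4) <= threshold:
            count += 1
    return count

for X in (5, 10, 20):
    c = near_equal_reciprocal_quadruples(X)
    print(X, c, c / X**2)  # the ratio c / X^2 stays bounded for these tiny X
```

Of course no computation at this scale says anything about the asymptotics; it just makes the shape of the count, diagonal plus a comparatively small off-diagonal contribution, visible.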
