Exponential Sums – Why So Bad at Solving Easy Problems?

additive-combinatorics · analytic-number-theory · exponential-sums

Exponential sums are a powerful tool in additive combinatorics and number theory. In my understanding, when it comes to estimating the cardinality of a certain set, exponential sums are (essentially) used in this way: (1) the indicator function of the set is replaced by an appropriate exponential sum; (2) the two sums are interchanged; (3) the main term is extracted; (4) the error term is bounded.
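For concreteness, step (1) usually rests on the orthogonality relation for additive characters: $\frac1m \sum_{0 \leq k < m} e^{2\pi i k t/m}$ equals $1$ if $m \mid t$ and $0$ otherwise. A minimal numerical check of this identity (Python, purely for illustration; the function name is my own):

```python
import cmath

def char_sum(t, m):
    """(1/m) * sum_{k=0}^{m-1} exp(2*pi*i*k*t/m): equals 1 if m | t, else 0."""
    return sum(cmath.exp(2j * cmath.pi * k * t / m) for k in range(m)) / m

m = 7
for t in range(-3, 10):
    # prints 1.0 exactly when t % m == 0, and 0.0 otherwise (up to rounding)
    print(t, round(char_sum(t, m).real, 10))
```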

However, I recently stumbled upon some simple additive problems for which the exponential-sums approach seems to fail miserably. My (somewhat philosophical) question is: Am I applying exponential sums in the wrong way? (And, in that case, how should I apply them to solve such problems?) Or are exponential sums simply not suited to these problems? (And, in that case, what is the reason?)

Below is an extremely simple example in which exponential sums seem to fail. Of course, I do not really care about solving such a problem with exponential sums; it is just for the sake of example.

Let $1 \leq h \leq m$ be fixed integers. We want to estimate the number $C$ of $(x, y) \in \{0,\dots,h-1\}^2$ such that $x \equiv y \bmod m$. Obviously, $C = h$. Proceeding with exponential sums, we have:
$$C = \sum_{0 \leq x < h} \sum_{0 \leq y < h} \begin{cases} 1 & \text{ if } x \equiv y \bmod m \\ 0 & \text{ if not} \end{cases}$$
$$\stackrel{(1)}{=} \sum_{0 \leq x < h} \sum_{0 \leq y < h} \frac1{m} \sum_{0 \leq k < m} \exp\Big(\frac{2 \pi i (x - y)k}{m}\Big)$$
$$\stackrel{(2)}{=} \frac1{m} \sum_{0 \leq k < m} \sum_{0 \leq x < h} \sum_{0 \leq y < h} \exp\Big(\frac{2 \pi i (x - y)k}{m}\Big)$$
$$\stackrel{(3)}{=} ???$$
In step (3) comes the problem: it is not clear what the main term of $\sum_{0 \leq k < m}$ should be. The term $k = 0$ surely is not, since it contributes only $h^2 / m$, which is much smaller than $h$ when $h$ is small compared to $m$. In fact, all the terms are nonnegative, the $k$-th contributing $\frac1{m}\Big|\sum_{0 \leq x < h} \exp\Big(\frac{2 \pi i x k}{m}\Big)\Big|^2$.
One can compute all the geometric sums explicitly and recover the result, but that is unwieldy.
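(For what it is worth, the identity after step (2) is easy to confirm numerically. A sketch, with names of my own choosing, that evaluates the right-hand side and always returns $h$, consistent with Parseval:)

```python
import cmath

def count_via_exponential_sums(h, m):
    """Evaluate (1/m) * sum_{0<=k<m} |sum_{0<=x<h} exp(2*pi*i*x*k/m)|^2."""
    total = 0.0
    for k in range(m):
        inner = sum(cmath.exp(2j * cmath.pi * x * k / m) for x in range(h))
        total += abs(inner) ** 2
    return total / m

for (h, m) in [(1, 5), (3, 7), (5, 5), (4, 10)]:
    print(h, m, round(count_via_exponential_sums(h, m), 8))  # always equals h
```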

Best Answer

The basic example of exponential sums in number theory is counting solutions to an equation such as $f(x_1,\cdots,x_n)\equiv 0 \pmod p$, where $p$ is prime. This is because $p$-adic solutions are a necessary condition for solutions in integers, and also because they feed into methods like the Hardy–Littlewood circle method. Essentially this is an instance of the Hasse principle: local solutions give a global solution in certain circumstances.

In this case the main term of your exponential sum, given by setting $k=0$, is the average number of solutions in the following sense. Pick values for the $x_i$ ranging over all residues $\bmod p$. This gives $p^n$ possible values of $f$. We now assume that these values are approximately evenly distributed $\bmod p$. It then makes sense that $p^n/p = p^{n-1}$ should be a good estimate for the total number of solutions to the congruence $f \equiv 0$.
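As a hedged illustration of this heuristic (the polynomial below is an arbitrary choice of mine, not anything canonical), one can compare a brute-force count with $p^{n-1}$:

```python
from itertools import product

def count_solutions(f, n_vars, p):
    """Brute-force count of solutions to f(x) ≡ 0 (mod p) over (Z/p)^n_vars."""
    return sum(1 for x in product(range(p), repeat=n_vars) if f(*x) % p == 0)

p = 11
f = lambda x, y: x**2 + 3 * y**2 + 1   # an arbitrary example polynomial
print(count_solutions(f, 2, p), "vs main term", p ** (2 - 1))
```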

The method of exponential sums then proceeds by showing that the error term, given by the rest of the sum, is small in comparison with the main term; this yields solutions for all large enough $p$. The classical example is that of diagonal equations $a_1x_1^k+\cdots +a_nx_n^k \equiv 0$. (André Weil analysed these in the paper in which he stated the famous Weil conjectures.)
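A quick brute-force experiment with a diagonal equation (my own toy parameters; the actual error bounds come from Weil's work) shows the count staying close to the main term $p^{n-1}$:

```python
from itertools import product

def diagonal_count(coeffs, k, p):
    """Count solutions of a_1*x_1^k + ... + a_n*x_n^k ≡ 0 (mod p)."""
    return sum(1 for x in product(range(p), repeat=len(coeffs))
               if sum(a * xi**k for a, xi in zip(coeffs, x)) % p == 0)

for p in [7, 13, 31]:
    n = diagonal_count([1, 1, 1], 2, p)       # x^2 + y^2 + z^2 ≡ 0 mod p
    print(p, n, "main term", p**2, "error", n - p**2)
```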

You can also use similar ideas to solve equations over restricted sets of variables. In your case, $f(x_1,x_2)=x_1-x_2$, solved $\bmod m$ with the variables restricted to the set $\{0,\cdots,h-1\}$. The same idea gives a main term of $h^2/m$. The problem is that the number of solutions of $x_1-x_2 \equiv r$ is not equidistributed as $r$ varies, unless $h$ is almost the same size as $m$; in that case $h^2/m\approx h$, which is an accurate estimate of the number of solutions.
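A small sketch (parameters mine) makes this failure of equidistribution visible: for $h$ much smaller than $m$, the representation counts of the residues $r$ are nothing like the average $h^2/m$:

```python
from collections import Counter

def difference_counts(h, m):
    """Number of pairs 0 <= x, y < h with x - y ≡ r (mod m), for each r."""
    counts = Counter((x - y) % m for x in range(h) for y in range(h))
    return [counts.get(r, 0) for r in range(m)]

h, m = 4, 20
print(difference_counts(h, m))   # peaked at r = 0, zero for most residues r
print("average:", h * h / m)     # the 'main term' prediction per residue
```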

In this case of large $h$ the error term is also small, so the standard exponential-sum approach does actually work.

Note that the reason the exponential-sum method breaks down is that the Fourier coefficients of your set $\{0,\cdots,h-1\}$ are not small when $h$ is small compared to $m$.
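One can see this directly. The following sketch (my own normalization, with $\widehat{1_A}(s)=\sum_{a\in A}e^{-2\pi i a s/m}$) computes the magnitudes of the Fourier coefficients of the interval and shows several of them comparable to the trivial coefficient $h$ at $s=0$:

```python
import cmath

def fourier_coefficients(h, m):
    """|hat{1_A}(s)| for A = {0,...,h-1} in Z/m, with hat{1_A}(s) = sum_a e(-2*pi*i*a*s/m)."""
    return [abs(sum(cmath.exp(-2j * cmath.pi * a * s / m) for a in range(h)))
            for s in range(m)]

h, m = 4, 20
coeffs = fourier_coefficients(h, m)
print([round(c, 3) for c in coeffs])  # several values close to the trivial one, h = 4
```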

We can relate this to a classic example from additive combinatorics: Roth's theorem on arithmetic progressions of length 3. We have $f(x_1,x_2,x_3)=x_1-2x_2+x_3$, and the exponential-sum method allows us to count 3-term arithmetic progressions $\bmod n$ where the $x_i$ are taken from sets $A,B,C$ that are reasonably dense in $\{1,\cdots,n\}$. We can express the solution count in the same way, with main term $|A||B||C|/n$ and error term a sum of Fourier coefficients of the indicator functions, $n^2\sum_{s \neq 0} \widetilde{A}(s) \widetilde{B}(-2s) \widetilde{C}(s)$, where $\widetilde{A}(s) = \frac1n \sum_{a \in A} e^{2\pi i a s/n}$. If the Fourier coefficients are small, indicating that the sets are uniformly distributed in a certain sense (often called pseudo-random), then we can show that the error term is small and solutions exist.
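To make the identity concrete, here is a sketch (sets and modulus are arbitrary choices of mine) verifying that the Fourier expression reproduces the brute-force count of solutions to $x_1-2x_2+x_3\equiv 0 \pmod n$; it uses the unnormalized transform $\widehat{1_A}(s)=\sum_{a\in A}e^{2\pi i as/n}$, so the count equals $\frac1n\sum_s \widehat{A}(s)\widehat{B}(-2s)\widehat{C}(s)$, which is the same quantity as $n^2\sum_s \widetilde{A}(s)\widetilde{B}(-2s)\widetilde{C}(s)$ above:

```python
import cmath

def hat(A, s, n):
    """hat{1_A}(s) = sum_{a in A} exp(2*pi*i*a*s/n) (unnormalized transform)."""
    return sum(cmath.exp(2j * cmath.pi * a * s / n) for a in A)

def ap3_count(A, B, C, n):
    """Count (x1,x2,x3) in A x B x C with x1 - 2*x2 + x3 ≡ 0 (mod n), via Fourier."""
    total = sum(hat(A, s, n) * hat(B, -2 * s, n) * hat(C, s, n) for s in range(n))
    return total.real / n

n = 15
A = {0, 1, 4, 9, 11}; B = {2, 3, 7, 8}; C = {1, 5, 6, 10, 12}
brute = sum(1 for a in A for b in B for c in C if (a - 2 * b + c) % n == 0)
print(round(ap3_count(A, B, C, n), 8), "==", brute)
```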

Again, the above does not work if the sets are not sufficiently pseudo-random for the main term to approximate the number of solutions. In that case one has to use alternative ideas: one shows that a set with a large Fourier coefficient is not uniformly distributed, and hence is concentrated in some arithmetic progression, which allows one to iterate. This is described very elegantly in Tim Gowers' papers on Szemerédi's theorem.

Others know these ideas much better than I do! I just wanted to give some background and note the importance of pseudo-randomness in the standard number-theoretic application of exponential sums, but also that, when this fails, there are techniques one can apply to deal with the case where the sets are more structured.
