Deligne-Type Exponential Sum Estimates – Why Are They Hard to Use?

ag.algebraic-geometry, analytic-number-theory, exponential-sums

Let $F$ be a finite field, and let $R(x_1,x_2,\ldots,x_n) := r_1(x_1,x_2,\ldots,x_n)/r_2(x_1,x_2,\ldots,x_n)$ be a rational function in $n$ variables. In analytic number theory or harmonic analysis one frequently comes across the need to estimate the exponential sum$$ S:= \sum_{x \in F^n} e( R(x_1,x_2,\ldots,x_n) ). $$
In the one-variable case, a result of Bombieri (using Weil's proof of RH for curves) gives a nearly complete understanding; in particular,
$$|S| \lesssim |F|^{1/2} $$
if the degree $\deg(r) := \deg(r_1) + \deg(r_2)$ is greater than one, where the constant depends only on (things that depend on) the degree of $r$.
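To see what this says concretely, here is a minimal numerical sketch (my own illustration, taking $F = \mathbb{F}_p$, $e(t) = e^{2\pi i t/p}$, and an arbitrary sample polynomial) of the Weil bound $|S| \le (d-1)|F|^{1/2}$ for a polynomial of degree $d$ coprime to the characteristic:

```python
import cmath
import math

def exp_sum(coeffs, p):
    """S = sum_{x in F_p} e(f(x)), where f(x) = coeffs[0]*x + coeffs[1]*x^2 + ..."""
    total = 0j
    for x in range(p):
        fx = sum(c * pow(x, k + 1, p) for k, c in enumerate(coeffs)) % p
        total += cmath.exp(2j * math.pi * fx / p)
    return total

p = 103                    # sample prime
S = exp_sum([1, 0, 1], p)  # f(x) = x + x^3, degree d = 3 (coprime to p)
assert abs(S) <= 2 * math.sqrt(p)  # Weil: |S| <= (d - 1) * sqrt(p)
```

For degree one the hypothesis fails, but the sum then vanishes exactly by orthogonality of additive characters, so the interesting cancellation question really starts at degree two.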
In the multivariate case one still hopes for estimates of the form $|S| \lesssim |F|^{n/2}$ unless something pathological happens (with again perhaps a reasonable dependence of the implied constant on $r$).

There is a whole literature of such estimates stemming from Deligne's work on the Riemann Hypothesis for varieties, by authors such as Katz, Hooley, Adolphson-Sperber, Kowalski, Fouvry, Michel, etc. Unfortunately, the hypotheses of these theorems often involve complicated concepts from algebraic geometry that make them difficult for the non-algebraic geometer to understand (much less apply). To take one example, a theorem of Hooley gives a rather general approach to the bivariate case, but then one needs to say things about the geometry of the curves $r_i(x,y) - t = 0$ over the algebraic closure $\overline{F}$ which do not seem easy to analyze. It seems that things only get more complicated in more variables, with theorem statements involving even more exotic concepts (e.g. sheaves, monodromy, etc.).

My (vague) question is: Why aren't simpler theorem hypotheses possible?
I welcome any attempts to explain this. But for concreteness, let me ask two better-posed variants:

[1] My sense is that two sources of complications arise from (1) trying to have an explicit dependence of the implied constant on $r$, and (2) exceptional conspiracies between $r$ and the field. If we only want some very crude constant (depending on the degree of $r$) and, for a fixed function, an estimate only in sufficiently large characteristic, do things get simpler?

[2] In what ways can square-root cancellation really fail asymptotically for rational (or just polynomial) sums? We know the answer is basically never in one variable, thanks to Bombieri's theorem. In the multivariable case, the only failing sums I'm aware of have dummy variables that can be factored out, by which I mean something like$$\sum_{x,y} e(x^2 +2xy + y^2) = \sum_{x,y} e( (x+y)^2 ) = |F| \sum_x e(x^2) \sim |F|^{3/2}.$$ Are there functions where asymptotic square-root cancellation fails for a "deep" reason that can't be explained elementarily by properties of the sum (obviously one can always save at least a factor of $|F|^{1/2}$ by fixing all but one variable)?
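To see the numbers concretely, here is a quick check (my own sketch, over the illustrative prime $p = 101$, with $e(t) = e^{2\pi i t/p}$) that this sum has magnitude exactly $|F|^{3/2}$, inherited from the quadratic Gauss sum $\sum_x e(x^2)$ of magnitude $|F|^{1/2}$:

```python
import cmath
import math

p = 101
e = lambda t: cmath.exp(2j * math.pi * (t % p) / p)

# Two-variable sum with the "dummy direction" x + y factored in
T = sum(e((x + y) ** 2) for x in range(p) for y in range(p))

# One-variable quadratic Gauss sum, |G| = sqrt(p) for odd prime p
G = sum(e(x * x) for x in range(p))

assert abs(T - p * G) < 1e-6          # T = |F| * sum_x e(x^2)
assert abs(abs(T) - p ** 1.5) < 1e-6  # so |T| = |F|^{3/2}: no square-root cancellation
```

The first identity holds because each value of $s = x+y$ is hit exactly $|F|$ times, which is precisely the dummy-variable factoring described above.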

Best Answer

There are a lot of subtle reasons such exponential sums can fail to exhibit square-root cancellation. First let me comment on the two reasons suggested in your question:

(1) trying to have an explicit dependence of the implied constant on $r$

This should never be a problem. Katz long ago proved simple, explicit Betti bounds that suffice for problems of this type, so in any kind of sophisticated cohomological argument the explicit constant should be no worse than the one arising from Katz's bounds (which depends only on the degree and the number of variables, and is not incredibly far from the optimum).

(2) exceptional conspiracies between 𝑟 and the field.

This can indeed pose a problem, and removing it can simplify things - basically, it should almost always let us use only the classical, characteristic zero topological versions of sheaves and monodromy and such instead of their exotic, étale cohomological incarnations, but not remove them entirely.

Why is there so much difficulty in the statements of these results, and a seeming need to invoke difficult geometric concepts?

One reason, or at least one way of shedding light on the difficulty, is that the same geometric tools (sheaves, monodromy, and more elementary geometric concepts) that let us prove upper bounds for exponential sums in some cases let us prove lower bounds in other cases. So we can rig up exponential sums that fail to exhibit square-root cancellation for more or less subtle geometric reasons. Thus any bound requires, one way or another, avoiding these examples.

Here's a simple one. The sum

$$ \sum_{a \in F} \left(\sum_{x_1,x_2\in F^*} e( x_1 +x_2 +a /(x_1x_2))\right)^n,$$ which can be expanded into a $(2n+1)$-variable sum, fails to admit full square-root cancellation if and only if $n$ is divisible by 3.

Why? The monodromy group of the hyper-Kloosterman sum, computed by Katz, is $SL_3$. Katz's computation shows that the inner sum over $x_1,x_2$ always exhibits full square-root cancellation, but the outer sum over $a$ cancels if and only if the $n$th tensor power of the standard representation of the monodromy group admits no invariant vector, and an invariant vector exists exactly when $n$ is divisible by $3$.
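This dichotomy is visible numerically in small cases. Below is a sketch of mine (the prime $p = 401$ and the grouping of the inner sum by $m = 1/(x_1x_2)$ are just implementation choices): the $n = 1$ moment over $a \ne 0$ equals exactly $-1$ by orthogonality, while the $n = 3$ moment has a main term of size roughly $p^3(p-1)$, far above the square-root benchmark $p^{7/2}$ for the corresponding $7$-variable sum.

```python
import cmath
import math

p = 401  # an illustrative prime
e_tab = [cmath.exp(2j * math.pi * k / p) for k in range(p)]
inv = [0] + [pow(x, p - 2, p) for x in range(1, p)]  # inverses in F_p

# Group the inner sum by m = 1/(x1*x2): c[m] = sum of e(x1 + x2) over such pairs
c = [0j] * p
for x1 in range(1, p):
    for x2 in range(1, p):
        c[inv[x1 * x2 % p]] += e_tab[(x1 + x2) % p]

# Inner two-variable sum Kl_3(a) = sum_{x1, x2 != 0} e(x1 + x2 + a/(x1*x2))
kl = [sum(c[m] * e_tab[a * m % p] for m in range(1, p)) for a in range(1, p)]

# n = 1: exact identity, sum_{a != 0} Kl_3(a) = (sum_{x != 0} e(x))^3 = -1  (tiny)
S1 = sum(kl)
assert abs(S1 - (-1)) < 1e-4

# n = 3: the invariant vector in the third tensor power forces a main term
# of size ~ p^3 (p - 1), far above the square-root prediction p^{7/2}
S3 = sum(z ** 3 for z in kl)
assert abs(S3) > p ** 3.5
```

The $n = 1$ assertion is an exact orthogonality computation; the $n = 3$ one is the empirical shadow of the invariant-vector main term described above.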

A second, more elementary example arose in a conversation with a couple of mathematicians who were specifically interested in character sums of products of linear factors. I therefore considered:

$$ \sum_{x,y \in F} \chi \left( xy \prod_{\zeta_1,\zeta_2\in \mu_3} (1 + \zeta_1 x+\zeta_2 y) \right) $$

where $F$ is a field of size congruent to $1$ mod $6$, $\chi$ is a character of the multiplicative group of $F$ of order $2$ (i.e. a Legendre symbol - certainly something that appears often in analytic number theory - if $|F|$ is prime), and $\mu_3$ is the group of third roots of unity in $F$.

This product of linear factors can actually be expressed as a rational function of $\frac{xy}{x^3+y^3+1}$ by an algebraic manipulation related to the Dwork family, and expressed in that variable one can see that it does not admit square-root cancellation.
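One explicit form of this collapse (my own algebra exercise, not necessarily the exact manipulation intended): using $\prod_{\zeta\in\mu_3}(A+\zeta B)=A^3+B^3$ and the circulant identity $\prod_{\zeta\in\mu_3}(a+b\zeta+c\zeta^2)=a^3+b^3+c^3-3abc$, the summand equals $x^4y^4(u^3-27)$ with $u=(x^3+y^3+1)/(xy)$, and since $x^4y^4$ is a nonzero square, $\chi$ of the summand depends only on $u$. A brute-force check over $\mathbb{F}_{31}$:

```python
p = 31  # p = 1 (mod 6), so F_p contains mu_3; here mu_3 = {1, 5, 25}
mu3 = [z for z in range(1, p) if pow(z, 3, p) == 1]

def chi(v):
    """Quadratic character (Legendre symbol) on F_p."""
    v %= p
    if v == 0:
        return 0
    return 1 if pow(v, (p - 1) // 2, p) == 1 else -1

# chi( xy * prod (1 + z1*x + z2*y) ) depends only on u = (x^3 + y^3 + 1)/(xy):
# the summand equals x^4 y^4 (u^3 - 27), and x^4 y^4 is a nonzero square.
for x in range(1, p):
    for y in range(1, p):
        f = x * y
        for z1 in mu3:
            for z2 in mu3:
                f = f * (1 + z1 * x + z2 * y) % p
        u = (pow(x, 3, p) + pow(y, 3, p) + 1) * pow(x * y, p - 2, p)
        assert chi(f) == chi(pow(u, 3, p) - 27)
```

So the two-variable character sum really is a one-variable sum weighted by the fiber counts of the Dwork-family map $(x,y)\mapsto u$, which is exactly why square-root cancellation in two variables fails.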

So for any bound on multiplicative character sums one somehow needs to rule out examples like this one.

A third example concerns sums of the special form $$\sum_{x_1,\dots,x_n\in F, y\in F^*} e( y f(x_1,\dots, x_n))$$ where one can trivially eliminate the variable $y$, but this just reduces one to the not-obviously-simpler problem of counting zeroes of $f$. Counting zeroes of $f$ can be as hard as counting points on basically any algebraic variety, and some - like abelian varieties - do not exhibit square-root cancellation for reasons that are geometrically meaningful (in this case, related to the topology of a complex torus) but not at all apparent from staring at the equations.
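The elimination of $y$ is just orthogonality of additive characters: $\sum_{y\in F^*} e(yc)$ equals $|F|-1$ when $c=0$ and $-1$ otherwise, so the whole sum equals $|F|\cdot\#\{f=0\} - |F|^n$. A quick sanity check of this identity with a sample polynomial of my choosing ($f = x_1^2 + x_2^2 - 1$ over $\mathbb{F}_{13}$):

```python
import cmath
import math

p = 13
e = lambda t: cmath.exp(2j * math.pi * (t % p) / p)

def f(x1, x2):  # sample polynomial; the identity below holds for any f
    return x1 * x1 + x2 * x2 - 1

# Full sum over x in F^2 and y in F^*
S = sum(e(y * f(x1, x2)) for x1 in range(p) for x2 in range(p) for y in range(1, p))

# Eliminating y via orthogonality: sum_{y in F^*} e(y c) = p - 1 if c = 0, else -1,
# so S = p * N - p^2, where N = #{x : f(x) = 0}
N = sum(1 for x1 in range(p) for x2 in range(p) if f(x1, x2) % p == 0)
assert abs(S - (p * N - p ** 2)) < 1e-6
```

So estimating the sum is exactly equivalent to counting zeroes of $f$; here the "circle" $x_1^2+x_2^2=1$ has $p-1$ points since $p \equiv 1 \pmod 4$, but for a general $f$ the count carries all the geometry of the underlying variety.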
