[Math] Analogues of P vs. NP in the history of mathematics

Tags: big-list, computational-complexity, ho.history-overview, soft-question

Recently I wrote a blog post entitled "The Scientific Case for P≠NP". The argument I tried to articulate there is that there seems to be an "invisible electric fence" separating the problems in P from the NP-complete ones, and that this phenomenon should increase our confidence that P≠NP. So for example, the NP-complete Set Cover problem is approximable in polynomial time to within a factor of ln(n), but is NP-hard to approximate to within an even slightly smaller factor (say, 0.999ln(n)). Notice that, if either the approximation algorithm or the hardness result had been just slightly better than it was, then P=NP would've followed immediately. And there are dozens of other examples like that. So, how do our algorithms and our NP-hardness results always manage to "just avoid" crossing each other, even when the avoidance requires that they "both know about" some special numerical parameter? To me, this seems much easier to explain on the P≠NP hypothesis than on the P=NP one.
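To make the algorithmic half of that Set Cover threshold concrete, here is a minimal sketch (an illustration added here, not part of the original post) of the classical greedy algorithm, which achieves the $\ln n$-ish approximation factor; the matching hardness result is what forbids doing substantially better:

```python
# Minimal sketch (illustrative): the classical greedy algorithm for Set Cover.
# It repeatedly picks the set covering the most still-uncovered elements and
# uses at most H_n <= ln(n) + 1 times as many sets as an optimal cover.

def greedy_set_cover(universe, subsets):
    uncovered = set(universe)
    cover = []
    while uncovered:
        # Choose the subset covering the largest number of uncovered elements.
        best = max(subsets, key=lambda s: len(s & uncovered))
        if not best & uncovered:
            raise ValueError("the given subsets do not cover the universe")
        cover.append(best)
        uncovered -= best
    return cover

# Toy instance: the optimum uses 2 sets, and greedy happens to find it here;
# in general greedy can be off by a factor of about ln(n), but no more.
universe = {1, 2, 3, 4, 5, 6}
subsets = [{1, 2, 3}, {4, 5, 6}, {1, 4}, {2, 5}, {3, 6}]
print(greedy_set_cover(universe, subsets))  # -> [{1, 2, 3}, {4, 5, 6}]
```

The "fence" is that this easy $\ln n$ guarantee and the $(1-\epsilon)\ln n$ NP-hardness threshold sit flush against each other.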

Anyway, one of the questions that emerged from the discussion of that post was sufficiently interesting (at least to me) that I wanted to share it on MO.

The question is this: When, in the history of mathematics, have problems "like P vs. NP" arisen and then been solved? In those cases, what were the resolutions?

I'd better clarify what I mean by a problem being "like P vs. NP"! I mean the following:

  1. Mathematicians managed to classify a large number of objects of interest to them into two huge classes. (Ideally, these would be two equivalence classes, like P and the NP-complete problems, which they conjectured to be disjoint. But I'd settle for two classes, one of which clearly contains the other, like P and NP, as long as many of the objects conjectured to be in $\operatorname{Class}_2 \setminus \operatorname{Class}_1$ were connected to each other by a complex web of reductions, as the NP-complete problems are, so that putting one of these objects in $\operatorname{Class}_1$ would place many of the others there as well.)

  2. Mathematicians conjectured that the two classes were unequal, but were unable to prove or disprove that for a long time, even as examples of objects in the two classes proliferated.

  3. Eventually, the conjecture was either proved or disproved.

  4. Prior to the eventual solution, the two classes appeared to be separated by an "invisible fence," in the same sense that P and the NP-complete problems are. In other words: there were many results that, had they been slightly different (say, in some arbitrary-looking parameter), would have collapsed the two classes, but those results always stopped short of doing so.

To give a sense of what I have in mind, here are the best examples we've come up with so far (I'd say that they satisfy some of the conditions above, but probably not all of them):

  • David Speyer gave the example of Diophantine sets of integers versus recursively enumerable sets. The former was once believed to be a proper subset of the latter, but now we know they're the same.

  • Sam Hopkins gave the example of symplectic manifolds versus Kähler manifolds. The former contains the latter, but the containment was only proved to be strict by Thurston in the 1970s.

  • I gave the example of independence results in set theory. Until Cohen, there were many proven statements about transfinite sets, and then a whole class of other statements—V=L, GCH, CH, AC, Zorn's Lemma, well-orderability…—that were known to be interrelated by a web of implications (or equivalent, as in the last three cases), but had resisted all proof attempts. Only with forcing were the two classes "separated," an outcome that some (like Gödel) had correctly anticipated.

Best Answer

This isn't an exact analogue of P≠NP, where two large classes exist and it is undecided whether they are equal. Instead, two large "universes" exist, only one of which is the truth; one of them is strongly believed not to exist, yet all attempts to disprove that parallel universe have been defeated by an invisible fence. (Perhaps a complexity-theory analogue would be a scenario in which we knew that, of Impagliazzo's five worlds, only Algorithmica or Cryptomania were possible, but we could not determine which, with both worlds showing an equal propensity to "want to exist".)

Anyway, the situation arises in analytic number theory, where there are two worlds (which, very roughly, would correspond to "Algorithmica" and "Cryptomania" in Impagliazzo's list):

  • Siegel zero: The primes conspire (i.e. show extremely anomalous correlation) with some multiplicative function, such as a Dirichlet character $\chi$; roughly speaking, this means that there is some modulus q such that there is a huge bias amongst the primes to be quadratic nonresidues mod q rather than quadratic residues. (Dirichlet's theorem tells us that the bias will die down eventually - for primes exponentially larger than q - but this is not useful in many applications). The most common way to describe this scenario is through a "Siegel zero" - a zero of an L-function that is really, really far away from the critical line (and really close to 1); a precise formulation is given just after this list. Weirdly, such a conspiracy actually makes many number theory problems about the primes easier rather than harder, because one gets to "pretend" that the Möbius function is essentially a character. For instance, there is a cute result of Heath-Brown that if there is an infinite family of Siegel zeroes, then the twin prime conjecture is true. (Basically, the principle is that at most one conspiracy in number theory can be in force for any given universe; a Siegel zero conspiracy sucks up all the "conspiracy oxygen", leaving none for a twin prime conspiracy to also hold.) It does lead to some other weird behaviour, though; for instance, the existence of a Siegel zero forces many of the zeroes of the Riemann zeta function to lie on the critical line and be almost in arithmetic progression.

  • Standard model: this is the universe that is believed to exist, in which the primes do not exhibit any special correlation with any other standard multiplicative function. In this world, GRH is believed to be true, and the zeroes should be distributed according to GUE statistics rather than in arithmetic progressions (this latter hypothesis has occasionally been called the "Alternative Hypothesis").
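To be concrete about what "really close to 1" means in the first bullet (this is just the standard definition, added here as a gloss): for a real primitive Dirichlet character $\chi$ modulo $q$, a Siegel (or "exceptional") zero is a real zero $\beta$ of $L(s,\chi)$ intruding into the classical zero-free region, i.e. satisfying

$$\beta > 1 - \frac{c}{\log q}$$

for the small absolute constant $c > 0$ defining that region. GRH would place all nontrivial zeros on the line $\operatorname{Re}(s) = \tfrac{1}{2}$, so the "standard model" world in particular has no such zeros.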

[This is an oversimplification; much as complexity theorists have not ruled out the intermediate worlds between Algorithmica and Cryptomania, we don't have as strong a separation between these two number-theoretic worlds as we would like. For instance, there could conceivably be intermediate worlds where there are no Siegel zeroes, but GRH or GUE still fails (somewhat analogous to Impagliazzo's "Pessiland"). So in practice we have to weaken one or the other of these worlds, for instance by replacing GRH with a much weaker zero-free region. I'm glossing over these technical details for this discussion, though. My feeling is that with current technology we have some chance of eliminating some more of these intermediate worlds, but we're quite stuck on eliminating either of the extreme worlds.]

In many different problems in analytic number theory, one has to split into two cases depending on which of the above universes is actually the one we live in, and use different arguments for each. (This is the major source of the notorious ineffectivity phenomenon in analytic number theory - that many of our estimates involve completely ineffective constants - since those constants could depend on the conductor of a Siegel zero, for which we have no upper bound.) Finding a way around this dichotomy in even just one of these problems would be a huge breakthrough and would likely lead soon to the elimination of one of these universes from consideration. But there is an invisible fence that seems to block us from doing so; both universes exhibit a surprising amount of "self-consistency", suggesting that one could modify our mathematical universe very slightly one way or the other to "force" one of the two scenarios to be in effect (a bit like how one can force P to equal or not equal NP by relativising). The "parity barrier" in sieve theory is one big (and relatively visible) section of this fence, but the fence seems to be much longer and more substantial than just this parity barrier.
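One standard instance of this ineffectivity (added here for illustration): Siegel's theorem says that for every $\varepsilon > 0$ there is a constant $c(\varepsilon) > 0$ with

$$L(1,\chi) > c(\varepsilon)\, q^{-\varepsilon}$$

for every real primitive character $\chi$ modulo $q$; but the proof gives no way to compute $c(\varepsilon)$, precisely because it argues differently depending on whether a Siegel zero exists. Any bound downstream of it, such as the error term in the Siegel-Walfisz theorem on primes in arithmetic progressions, inherits an ineffective constant for the same reason.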

I discuss these things a bit more in this blog post. See also this survey of Conrey that was also linked to above.