[Math] Is P=NP relevant to finding proofs of everyday mathematical propositions

Tags: computational-complexity, lo.logic

Disclaimer: I don't know a whole lot about complexity theory beyond, say, a good undergrad class.

With increasing frequency I seem to be encountering claims by complexity theorists that, in the unlikely event that P=NP were proved and an algorithm with reasonable constants found, mathematicians wouldn't bother trying to prove things anymore because we could just use our P-time algorithm to search for proofs. Usually this is part of an argument for why all mathematicians and logicians should care a lot about P=?=NP.

I think most of these claims are exaggerations of the first full paragraph on page 8 of Cook's problem description for the Clay Institute (which itself is stated in a completely reasonable and unexaggerated manner).

However, it's quite clear from the Clay Institute description that P=NP is relevant only to classes of problems, parameterized by some integer $n$, for which we have already proved all three of the following:

  1. the question is not independent of our chosen axioms ($T\vdash \phi\vee T\vdash \neg\phi$)
  2. if the proposition is provable, some proof has size at most polynomial in $n$
  3. if the negation is provable, some proof has size at most polynomial in $n$

This way we know there's a proof of either the proposition or its negation, and the search problem for the one that does exist falls inside NP, so we can dovetail the two searches and stop when one of them succeeds.
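The dovetailing idea can be sketched concretely. In this toy illustration the two "proof checkers" are placeholder predicates (not real proof systems): we enumerate candidate proofs by increasing length and run one step of each search per candidate, stopping at the first success.

```python
from itertools import product

def dovetail_search(check_pos, check_neg, alphabet="01", max_len=12):
    """Interleave two brute-force proof searches: one for a proof of the
    proposition (check_pos) and one for a proof of its negation (check_neg).
    Candidates are enumerated by increasing length; the search stops as soon
    as either checker accepts. Returns (which, candidate), or None if no
    proof of length <= max_len exists on either side."""
    for n in range(1, max_len + 1):
        for cand in ("".join(t) for t in product(alphabet, repeat=n)):
            # One step of each search per candidate (the dovetailing).
            if check_pos(cand):
                return ("proposition", cand)
            if check_neg(cand):
                return ("negation", cand)
    return None

# Toy stand-ins for proof checkers: a "proof" of the proposition is any
# string containing "11"; a "proof" of the negation is any string
# containing "000". Purely hypothetical, for illustration only.
result = dovetail_search(lambda s: "11" in s, lambda s: "000" in s)
# result == ("proposition", "11")
```

Under assumptions 1-3 above, exactly one of the two checkers eventually accepts within the polynomial length bound, so the interleaved search terminates.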

This puzzles me. Most of the propositions mathematicians care about don't come in integer-parameterized classes, let alone classes with known proof-size bounds. Usually they come in classes of size 1 with no knowledge of proof-size. Is there some trick for turning the sorts of results mathematicians care about into these integer-parameterized-polynomially-bounded classes?

Example: how would you do this for the question of whether or not CH is independent of ZFC?

Cook and Reckhow's JSL article "The Relative Efficiency of Propositional Proof Systems" (which seems to be the starting point for the literature) actually considers the problem class consisting of all propositions in some proof system (such as first-order predicate calculus), with the length of the proposition as the parameter and "is it entailed by the axioms" as the question. At the time the paper was published (1979), no real-world proof system was known to have the desired property, and a few were known not to have it.

I suppose I am being slightly lazy here, since the study of which problems have this property is a whole subfield with plenty of literature I could read, but really I'm only interested in whether or not that subfield's positive results to date justify the claims I've been hearing lately. A reference to a paper containing the "trick" above would be fine as an answer.

Best Answer

Let me address the issue at the beginning of the original question: If P=NP were proved and an algorithm with reasonable constants found, would mathematicians stop trying to prove things? The relevant NP set in this situation seems to be the $L_1$ of Ryan Williams's answer, which I regard (or decode) as the set of pairs consisting of a proposition $P$ to be proved and an upper bound $n$, written in unary notation, for the proof length. If we had a polynomial time algorithm for this NP set, then I could apply it as follows. Take $P$ to be some proposition that I'm tempted to work on, and take $n$ to be larger than any proof that I'd have time to write out in my life. If the algorithm, applied to these inputs, says "no" then I shouldn't work on this problem, because any proof would be too long for me to write out. If the algorithm says "yes" then I still shouldn't work on the problem because a P-time algorithm for Ryan's $L_2$ could find the proof for me.

All of this, however, depends on an extremely optimistic understanding of "reasonable constants". The $n$ I chose is (I hope) rather big, so even a quadratic-time algorithm (with a small coefficient on the quadratic term) could take a long time (longer than my lifetime).
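The step "a P-time algorithm for Ryan's $L_2$ could find the proof for me" is the standard search-to-decision self-reduction: repeatedly ask a decider whether some proof of length at most $n$ extends the current prefix, and extend the prefix one symbol at a time. A minimal sketch, in which the hypothetical decider is simulated by a single fixed "proof" string:

```python
def make_toy_decider(target):
    # Stand-in for a hypothetical P-time decider of the NP question
    # "does some proof of length <= n extend this prefix?".
    # In this toy, the only proof that exists is the fixed string `target`.
    def decider(prefix, n):
        return len(target) <= n and target.startswith(prefix)
    return decider

def find_proof(decider, is_complete, n, alphabet="01"):
    """Recover a proof of length <= n via prefix-extension queries to the
    decider: search reduces to decision."""
    if not decider("", n):
        return None  # no proof of length <= n exists at all
    prefix = ""
    while not is_complete(prefix):
        for sym in alphabet:
            if decider(prefix + sym, n):
                prefix += sym
                break
        else:
            return None  # unreachable if the decider is sound
    return prefix

target = "0110"  # hypothetical encoded proof
proof = find_proof(make_toy_decider(target), lambda p: p == target, n=10)
# proof == "0110"
```

Each of the at most $n$ extension steps makes a constant number of decider calls, so if the decider really ran in polynomial time with small constants, a "yes" answer would also hand over the proof itself.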

The bottom line is that, if P=NP were proved with plausible constants in the running time, it would not be foolish for me to keep trying to prove theorems. (Even if it were foolish, I'd keep trying anyway, partly because it's fun and partly because people might like my proof better than the lexicographically first one.)

By the way, the system in which proofs are done should, for these purposes, not be simply an axiomatic system like ZFC with its traditional axioms and underlying logic. It should be a system that allows you to formally introduce definitions. In fact, it should closely approximate what mathematicians actually write. The reason is that, although I'm looking only for proofs short enough to write in my lifetime, that doesn't mean proofs short enough to write in primitive ZFC notation in my lifetime. I believe some (if not all) of the proofs I've published would, if written in primitive ZFC notation, be too long for a lifetime.
