Let's suppose P = NP is independent of ZFC. Then there is a model of ZFC in which there is a polynomial-time algorithm for SAT. But if this algorithm is correct, wouldn't it also exist in the standard model? After all, an algorithm is just a number. My question is: how can there exist a polynomial-time algorithm for SAT in a model of ZFC while P = NP remains unprovable? In other words, how can P = NP be independent?
[Math] Independence of P = NP
computational-complexity, set-theory
Related Solutions
Zirui Wang, if I understand your proof correctly, then it is wrong.
You've made the supposition "No SAT algorithm is bounded by $f$" (where $f$ is some super-polynomial function). From this it certainly follows that no SAT algorithm runs in polynomial time; however, it does not follow that this latter statement is provable (in ZFC or whatever). You've substituted the a priori knowledge that $f$ has the desired property for a proof that $f$ has the desired property, but it's the latter of these two that you actually need to make your argument work.
(The same sort of error was made, I believe, in the fallacious argument here: Provability of termination. What's wrong with my reasoning?)
Similarly, the claim "If it's indeed independent, then all statements that imply P = NP are false and all statements that imply P ≠ NP are also false" is wrong: if P = NP is independent, then all statements implying P = NP are either false or unprovable.
The short answer is that there are no implications.
As Scott Aaronson said on his blog, the fact that even a polynomial time algorithm for GI has no implications for complexity classes is one reason some people conjectured that GI might be in P. He continued:
In fact, the main implication for complexity theory, is that we’d have to stop using GI as our flagship example of a problem that has statistical zero-knowledge proof protocols and dozens of other amazing properties, despite not being known to be in P, which would make all those properties trivial. :-) It’s similar to how the main implication of PRIMES in P, was that we had to stop using primality as our flagship example of a problem that was known to be in randomized polynomial time but not deterministic polynomial time. Still, I managed to adapt in my undergrad class, by talking about the randomized test for primality, and then treating the fact that it’s actually in P as just a further amazing wrinkle in the story. And I’d be happy to adapt similarly in the case of GI…
Similarly there are no implications for quantum computing.
As for the prospects for further improvement, the best reference as of this writing may be Jeremy Kun's notes on Babai's talk. One key point is that Babai's algorithm relies on a strategy that he calls "split or Johnson" and he can show that this strategy is not enough to put GI in P.
UPDATE: Babai has posted a preprint to the arXiv.
UPDATE: Babai's talk at the Institute for Advanced Study is available online. IMO this talk is much better than the very first talk he gave at Chicago; he's had time to prepare slides and greatly clarify the presentation.
Best Answer
There are examples such as the one due to Levin mentioned here which you can write down explicitly, but whose running time is polynomial if and only if P=NP. Thus in some [admittedly rather trivial] sense it's not finding an algorithm which is hard, but proving that it runs in polynomial time. This is the part which could conceivably be independent.
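The flavor of Levin's construction can be sketched in code. The idea of his universal search is to dovetail over an enumeration of all candidate algorithms, giving algorithm $i$ roughly a $2^{-i}$ share of the time budget, and to accept the first answer that a polynomial-time verifier approves. The verified answer is provably correct no matter which strategy produced it; what you cannot (easily) prove is a polynomial bound on the running time, since that bound holds iff some polynomial-time SAT algorithm appears somewhere in the enumeration. The sketch below is a toy illustration, not Levin's actual construction: `strategies` is a hypothetical stand-in list of candidate-assignment generators (a real universal search would enumerate all programs), and the names `levin_search_sat`, `sat_check`, and `brute_force` are invented for this example.

```python
from itertools import product

def sat_check(clauses, assignment):
    """Polynomial-time verifier: check an assignment against a CNF formula.
    Clauses are lists of nonzero ints; literal k means variable |k|,
    negated if k < 0 (DIMACS-style)."""
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in clauses)

def brute_force(clauses, n_vars):
    """One concrete 'strategy': enumerate all 2^n assignments."""
    for bits in product([False, True], repeat=n_vars):
        yield {v + 1: bits[v] for v in range(n_vars)}

def levin_search_sat(clauses, n_vars, strategies):
    """Toy Levin-style search: dovetail over the strategies, giving
    strategy i a 2^-(i+1) share of each doubling time budget, and return
    the first candidate the verifier approves."""
    iters = [iter(s(clauses, n_vars)) for s in strategies]
    for t in range(1, 64):           # phase t has total budget 2^t steps
        budget = 2 ** t
        for i, it in enumerate(iters):
            steps = budget >> (i + 1)  # strategy i's share of this phase
            for _ in range(steps):
                try:
                    cand = next(it)
                except StopIteration:
                    break
                if sat_check(clauses, cand):
                    # Correct regardless of which strategy found it:
                    # correctness rests only on the verifier.
                    return cand
    return None  # no satisfying assignment found within the search limits
```

For example, `levin_search_sat([[1, 2], [-1, 2]], 2, [brute_force])` returns an assignment with variable 2 set to `True`. The point of the construction is that if any polynomial-time SAT solver exists at position $i$ of a genuine enumeration of all programs, this procedure runs in polynomial time (with a constant-factor overhead of about $2^{i}$); so the algorithm is written down explicitly, and only the proof of its running-time bound is at stake.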