Proofs that include computer checking for “small” numbers

computer-science, proof-theory

In number theory there are proofs which establish that some statement holds for all natural numbers greater than some bound $N$. This $N$ can easily be large, say $10^{30}$. So one still has to check that the statement holds for all $n < N$, and naturally, you leave this to a computer.
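As a concrete sketch of what the finite part typically looks like (the property here is a toy placeholder of mine, not from any specific theorem), the computer's job is just an exhaustive loop over $n < N$:

```python
def holds_for(n: int) -> bool:
    # Placeholder for the actual decidable property from the proof;
    # as a toy stand-in, check that n * (n + 1) is even.
    return (n * (n + 1)) % 2 == 0

def check_up_to(N: int) -> bool:
    """Exhaustively verify the property for every n < N."""
    for n in range(N):
        if not holds_for(n):
            print(f"counterexample: n = {n}")
            return False
    return True

if __name__ == "__main__":
    assert check_up_to(10**6)
```

The worry in the rest of this question is precisely that this loop, however simple, runs on a stack of compilers, operating systems, and silicon that the mathematics itself never certifies.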

Now there are a couple of things bothering me about this. You write an algorithm to check the property for all $n < N$. How can you prove that the algorithm is correct? I've been thinking about this, and I think I know the answer, but I'm putting the remark here in case someone finds it interesting: it's the same as asking "How do you prove your proof is correct?" So this is not much of an issue, because your algorithm is usually $100$ lines of code or so, and you can convince yourself that it is correct.

However, some things still concern me. You compile your code on some machine, and thus you are relying on the correctness of far more than just your code. And proving the correctness of the compiler is not only utterly hopeless; any claim that it is correct would simply be wrong, because every now and then some new bug is reported.

Now, proving the correctness of the compiler and the OS, however hopeless and misguided it may seem, is at least theoretically possible. But you are actually relying on more than that: you are relying on the physical components of the computer working properly. And that, to my knowledge at least, is something whose correctness is impossible to prove.

For subjects that require less rigorous arguments than mathematics, this is just fine. But I'm wondering: is there some mathematical approach to this issue when dealing with problems such as the one stated at the beginning? Is this even something we should care about?

My feeling is that it's still better than checking by hand, but it twists my mind to put this kind of proof in the same basket as a proof not involving a computer.

Best Answer

Basically, as others have said, it is naive to think that people haven't cared deeply about this and sought answers. Here is a list of known solutions to this perceived problem.

  1. Is it really a problem? Classification problems are almost always wrong the first time (especially when they aren't computer assisted, as history shows with the classifications of groups and algebras), but somehow mathematics has survived each correction. This suggests that proofs which use case-by-case analysis are flexible about having more cases introduced: a pain, and embarrassing, but routine. In fact, errors in mathematics are extremely common; often the issue is as simple as misunderstanding notation or hypotheses. We find ways to correct it and move on. Why should a computer error be treated more harshly?
  2. The answer is probably true. Oodles of questions are solved probabilistically; without randomization, the algorithms would take too long. Should those be trusted, when they explicitly admit the possibility of failure? Well, yes, because they are extremely explicit about how they could fail, and the failure probabilities are driven down to such small amounts that it would be more likely that quantum effects made the electrons in the computer flip to the wrong answer than that the computation was actually wrong. (See the first sketch after this list.)
  3. Can we certify the result? Perhaps most exciting is the concept of Las Vegas algorithms, a name given by Babai to a class of randomized algorithms that guarantee their answer. The way they do this is that they produce, along with the answer, a proof that it is correct, and you can go check that proof. This can be as simple as justifying "not prime" by giving factors, but far more exotic certificates now exist for remarkably subtle questions. Certificates are also used with deterministic algorithms: run something, and even if there is a bug, if your answer includes a proof, you are fine. (See the second sketch after this list.)
  4. Can the algorithm be type-checked? As already mentioned, if your logic is purely intuitionistic (no proofs by contradiction), then you can in fact model the computation completely soundly in a proof checker. Proofs written this way are as boring as a high school geometry quiz, but they are proofs. (See the Lean example at the end.)
  5. Computer proofs are sometimes more trustworthy. There are many proofs I would believe from a computer before I would believe them from a person, say just about any solution to a linear algebra problem in more than two dimensions. So belief here is a somewhat subjective measure.
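
To make point 2 concrete, here is a minimal sketch (my example, not part of the original answer) of the Miller–Rabin primality test, the standard probabilistic algorithm of this kind. Each round that fails to find a witness of compositeness cuts the error probability by at least a factor of $4$, so after $k$ rounds a composite number is misreported as prime with probability at most $4^{-k}$:

```python
import random

def miller_rabin(n: int, rounds: int = 40) -> bool:
    """Probabilistic primality test.

    A False answer is certain (a witness of compositeness was found);
    a True answer means "probably prime", wrong with probability
    at most 4**(-rounds).
    """
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses compositeness: certainly not prime
    return True  # probably prime

print(miller_rabin(2**89 - 1))  # True: a Mersenne prime
print(miller_rabin(561))        # False: a Carmichael number, caught early
```

With `rounds = 40` the error bound is $4^{-40} < 10^{-24}$, comfortably in the regime the answer describes, where a hardware glitch is the more likely failure.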
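Point 3 is worth a sketch too, because the punchline is how little you have to trust: the certificate checker can be vastly simpler than the (possibly buggy, possibly randomized) search that produced the answer. A toy version of the "not prime" certificate mentioned above, with hypothetical names of my own:

```python
def check_composite_certificate(n: int, factor: int) -> bool:
    """Verify a 'not prime' certificate: a nontrivial divisor of n.

    No matter what untrusted process produced `factor`, if this check
    passes then n really is composite; the trusted base is one
    comparison and one divisibility test.
    """
    return 1 < factor < n and n % factor == 0

# Some untrusted search claims 561 is composite and offers 3 as evidence.
assert check_composite_certificate(561, 3)
```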
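Finally, point 4 connects back to the question itself: a proof assistant can run the finite check for all $n < N$ inside its trusted kernel, so the output is a machine-checked proof term rather than the say-so of a compiled program. A minimal Lean 4 sketch, using the toy property that $n(n+1)$ is even (the property and the bound are placeholders of mine):

```lean
-- `decide` makes the kernel itself evaluate the boolean check for
-- every n < 1000; what you get back is a proof object, not just the
-- output of an untrusted binary. (For large bounds, `native_decide`
-- trades some of that trust for speed.)
example : ((List.range 1000).all fun n => n * (n + 1) % 2 == 0) = true := by
  decide
```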