[Math] The Halting Problem and Church’s Thesis

computability-theory, lo.logic, proof-theory

In the opening chapters of Hartley Rogers, Jr.'s book Theory of Recursive Functions and Effective Computability, the proofs of the unsolvability of the halting problem and related unsolvability results invoke Church's Thesis. Can Church's Thesis be avoided?

Edit added on September 7 while putting a bounty on the question:

While in no way whatsoever doubting the diagonal argument per se, I am nevertheless in doubt as to how it is meant to work in the undecidability setup, as witnessed also by my probes. One $\textit{evasion-possibility}$ that I, given my befuddlements, am not able to exclude a priori is that there $\textit{is}$ a recursive function that tells us whether a given Turing machine halts, but that a diagonalisation of that Turing machine would – pace Church and Turing – provide an effective non-recursive method beyond that recursive function.

I am rather certain that the evasion-possibility does not obtain, but I want to understand precisely why it does not. Somehow, I would have thought that the answer to this was simple. (The last sentence was not meant to contradict or undermine comments that pointed towards the tediousness of carrying these matters through.)

Best Answer

Let me point out that there is really a family of Church-Turing thesis assertions.

On the one hand, for what is sometimes described as the weak Church-Turing thesis, one imagines an idealized human agent, not constrained by resources of time or memory (or supplies of paper and pencil), but carrying out the kind of idealized computation that Turing had described — using paper and pencil calculations according to certain kinds of formal rules — and the claim is that any such algorithm that could in principle be carried out by such an idealized human agent can in fact be carried out by a suitable Turing machine program. According to this version of the thesis, therefore, the Turing-machine account of computability has captured the correct notion of computability-in-principle for an idealized human agent.

I believe that it is the weak Church-Turing thesis that most mathematicians have in mind when speaking of the Church-Turing thesis. A convincingly large piece of evidence for at least this weak form of the thesis is the mathematical fact that all the various notions of formal computability have been proved equivalent. These include Turing machines, modified and expanded Turing machines such as multi-tape machines and Turing machines with expanded alphabets or instruction sets, but also register machines, modified register machines, flowchart machines, machines based on idealized versions of the BASIC programming language, or C++, or whatever other computer language, and so on. All these notions can simulate one another, and this makes us think that we have correctly captured the notion of computability-in-principle with any one of them.
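As a small illustration of one direction of these simulations, namely a Turing machine being carried out by a program in an ordinary programming language, here is a minimal sketch in Python; the particular machine, the sparse-dictionary tape, and the step bound are choices invented for this example and are not taken from any particular text.

```python
BLANK = "_"

def run_turing_machine(transitions, tape, state="start", steps=10_000):
    """Simulate a one-tape Turing machine.

    transitions maps (state, symbol) -> (new_state, written_symbol, move),
    where move is -1 (left) or +1 (right).  The run stops when no transition
    applies (the machine halts) or after `steps` steps.
    """
    cells = dict(enumerate(tape))       # sparse tape: position -> symbol
    head = 0
    for _ in range(steps):
        symbol = cells.get(head, BLANK)
        if (state, symbol) not in transitions:
            break                       # no applicable rule: the machine halts
        state, written, move = transitions[(state, symbol)]
        cells[head] = written
        head += move
    # Read the tape contents back off, left to right.
    if not cells:
        return ""
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, BLANK) for i in range(lo, hi + 1)).strip(BLANK)

# Example machine: scan right over the input, flipping 0s and 1s, and halt
# on the first blank cell.
flipper = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}

print(run_turing_machine(flipper, "0110"))   # prints 1001
```

The converse direction, simulating programs of a general-purpose language by Turing machines, is typically the more laborious part of the equivalence proofs alluded to above.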

And surely it is only the weak Church-Turing thesis that is used in the arguments that Rogers mentions. When one describes a computational procedure in the way that Rogers does, or anyone does in the entire field of computability theory, one is describing a computational procedure that could in principle be carried out by an idealized human agent using only paper and pencil with plenty of time and plenty of paper. So this is a use of the comparatively uncontested formulation of the Church-Turing thesis. As Andrej and the other commentators testify, there is nearly universal agreement on the accuracy of the weak Church-Turing thesis.
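To make the role of the thesis vivid, here is a sketch of the standard diagonal argument, carried out entirely inside the formal framework, with $\varphi_e$ denoting (in the usual notation) the partial recursive function computed by the $e$-th Turing machine. Suppose toward a contradiction that some Turing machine decides the set $K=\{e : \varphi_e(e)\text{ converges}\}$. Combining that machine with the universal machine, one explicitly constructs a Turing machine computing the total function
$$g(e)=\begin{cases}\varphi_e(e)+1 & \text{if } e\in K,\\ 0 & \text{otherwise,}\end{cases}$$
and this explicit construction, rather than any appeal to the thesis, is the tedious part that the comments alluded to. So $g=\varphi_{e_0}$ for some index $e_0$, and since $g$ is total we have $e_0\in K$, whence $g(e_0)=\varphi_{e_0}(e_0)+1=g(e_0)+1$, a contradiction. Church's thesis is not used anywhere in this argument; it enters only afterward, when one passes from "no Turing machine decides $K$" to "no effective procedure decides $K$." In particular, the evasion-possibility raised in the question cannot arise: any recursive function deciding halting would make the diagonal function $g$ itself recursive, not merely effective, and the contradiction already occurs at the formal level.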

The point I want to emphasize, however, is that there is another, stronger version of the thesis, the strong Church-Turing thesis, which asserts that not only are the idealized paper-and-pencil computational procedures all simulable by Turing machines, but also that any algorithmic procedure we could in principle carry out in our physical universe, however strange, is simulable by Turing machines. This is a much stronger claim.

Frankly, the evidence for this stronger version of the Church-Turing thesis is considerably weaker, in light of the fact that the fundamental nature of physical reality, with its various quantum effects and relativistic effects such as time dilation, is quite bizarre. We don't actually have much reason to think that it should not be possible in principle to take advantage of these effects for computational purposes.

Quantum Turing machines may be a familiar example. Although we already know a lot about quantum Turing machines, they do not in fact violate the strong Church-Turing thesis, since they are in principle simulable by ordinary Turing machines (although the simulation can take much longer, so quantum machines may give rise to a new complexity theory, even though they give rise to the same computability theory). So there seems to be no hidden violation of the strong Church-Turing thesis arising there.

But meanwhile, there are other strange computational procedures, such as black-hole computation, which seem possibly to offer a way to violate the strong Church-Turing thesis by taking advantage of relativistic effects such as time dilation that occur in our actual physical world. See Philip Welch's article, The extent of computation in Malament-Hogarth spacetimes, for a great summary and mathematical analysis of this kind of thing.

Jack Copeland has particularly emphasized the distinction between the weak and the strong Church-Turing theses.