[Math] Completeness, easiest, hardest problems

computability-theory, computational-complexity

One says that a language $L$ is complete for a complexity class $\mathcal{C}$ if $L$ is in $\mathcal{C}$ and every language in $\mathcal{C}$ is reducible to $L$. Thus, in a sense, $L$ is the hardest language in $\mathcal{C}$. For example, $3SAT$, being $\mathcal{NP}$-complete, is in this sense the hardest language in $\mathcal{NP}$.

Now recall Rice's theorem: any nontrivial semantic property of the language recognized by a Turing machine is undecidable. The proof of this theorem essentially rests on the fact that if any such property were decidable, it could be used to decide the $HALTING$ language. Thus, in a sense, $HALTING$ is the easiest among all such properties.
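The standard reduction can be sketched schematically. This is illustrative only: "machines" are modeled as Python callables, and the names `build_gadget` and `P_yes` are my own, not standard notation.

```python
def build_gadget(M, w, P_yes):
    """Given a machine M and an input w, build a machine M' whose language
    has the property iff M halts on w.  P_yes is any fixed machine whose
    language HAS the property (one exists, since the property is nontrivial
    and, say, the empty language lacks it)."""
    def M_prime(x):
        M(w)             # first simulate M on w; if M diverges, so does M'
        return P_yes(x)  # if M halted, behave exactly like P_yes from here on
    return M_prime

# If M halts on w, then L(M') = L(P_yes), which has the property.
# If M loops on w, then L(M') is empty, which lacks the property.
# So any decider for the property would yield a decider for HALTING.
```

In Python we can of course actually run the simulation; the point of the construction is only the shape of the reduction, not executability.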

And then here is a peculiar thing: the proof of Rice's theorem—at least the one I've seen—amounts to constructing a reduction from $HALTING$ to every property. Contrast this with the definition of completeness, and one realizes that these are “opposites” of each other in a sense.

Question: Is there more to this observation? Is there work done on this? If there are experts in the audience, could you elaborate on my observation? Does one ever consider “easiest” problems in complexity theory as in my example above, and if so, does it lead to interesting results (as it does for Rice's theorem above)?

Thank you in advance for your comments and your effort in reading and thinking about my question.

Rev. 1:

Note that when I say $HALTING$ is the easiest property, I don't mean to imply that it is easy. Of course, it is an undecidable language. It is “easy” in a relative sense: every other property is at least as hard as $HALTING$, because if any other property can be decided, then it yields a decider for $HALTING$.

When we talk of a complete problem, we use similar terminology: if a language in a class has a reduction from every other language in the class, then we call it complete and rightly refer to it as one of the hardest languages in the class.

This is the subtle similarity / difference I am trying to point out. These two are in a sense opposites. In one case, we get reductions from all languages in the class. In the other, we get reductions from one language to all other languages in the class. In the first case, we call the target hardest. In the second, we (could) call the source the easiest.

I hope that this provides a clearer context for my question. Thanks.

Best Answer

One of the most common and powerful means of showing that a decision problem is undecidable is, as in the case you mention, to show that the halting problem reduces to it. For example, this is the basis of undecidability for almost all of the answers listed in the MO question asking for attractive examples of undecidable problems.

But sometimes people are led astray by this fact, and begin erroneously to believe that a problem $A$ is undecidable if and only if the halting problem reduces to it. That is, to use your terminology, they are led to believe that the halting problem is the easiest undecidable problem. One sometimes sees this error among mathematicians, even very good ones, who first begin to consider decidability issues in their domain. The error is to assume that because undecidability is often proved by reducing the halting problem to the problem at hand, this is the only way undecidability arises. But this isn't true.

This issue and its resolution are central in the history of computability theory, connected with its birth and maturation as a subject. Specifically, Emil Post asked what is now known as Post's problem: whether there are any computably enumerable Turing degrees strictly between the decidable ones and the halting problem. In the jump notation, Post's problem asks whether there is a computably enumerable degree $d$ with $0 \lt_T d \lt_T 0'$. One could view Post's problem in your terms: he is asking whether the halting problem is easiest among all c.e. problems. A related version would ask whether it is the easiest undecidable problem of all.

The answer was given by Friedberg and Muchnik, who invented the priority argument method specifically to solve it, a method that has become fundamental, used in thousands of constructions in computability theory in order to reveal the nature and structure of the Turing degrees. The answer is that there are many such intermediate degrees! Indeed, there is a rich structure of c.e. degrees strictly below the halting problem. In fact, every countable partial order embeds into the hierarchy of degrees below the halting problem.

Thus, it is not correct to say that HALTING is the easiest undecidable problem. Any of the intermediate sets constructed by the priority method are strictly easier than HALTING, but still undecidable.

This is not to dispute the technical claim you made about Rice's theorem, but only your interpretation of it. The properties you consider are not actually languages, but sets of languages, or rather, sets of Turing machine programs that respect the equivalence of deciding the same set. Thus, your properties are just a special case of the collection of all possible languages. The reason the halting problem reduces to these particular sets has to do with the fact that the halting problem does reduce to the problem of deciding whether two programs compute the same set. Indeed, this equivalence relation, making two programs equivalent when they compute the same set, is strictly harder than the halting problem; I believe it is a complete $\Pi^0_2$ set, which would make it equivalent to the double jump $0''$.
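To unpack the quantifier count behind that $\Pi^0_2$ claim (my own sketch, in my own notation): membership $x \in W_M$ is a $\Sigma^0_1$ statement, so program equivalence takes the form
$$W_M = W_N \iff \forall x \, \bigl( \exists s \, \operatorname{Acc}(M,x,s) \;\leftrightarrow\; \exists t \, \operatorname{Acc}(N,x,t) \bigr),$$
where $\operatorname{Acc}(M,x,s)$, asserting that $M$ accepts $x$ within $s$ steps, is decidable. The matrix is a Boolean combination of $\Sigma^0_1$ formulas, and prenexing it under the universal quantifier yields a $\forall\exists$ form over a decidable predicate, hence $\Pi^0_2$.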
