Relationship between Post’s theorem and minimization

computability, logic

I am confused about the relationship between the quantifier complexity of formulae in the language of arithmetic and the computational classification of the sets those formulae define.

Specifically, my question boils down to whether applying the minimization operator $μ$ to a primitive recursive predicate always produces a $Δ^0_1$, total computable function. If it does, what is the proof? What guarantees that the minimization always returns a value? If it does not, how does that not violate Post’s theorem?

What confuses me is that the minimization operator seems to correspond perfectly to an unbounded existential quantifier: both are recursively enumerable but not recursive. If the number we’re looking for exists, an unbounded search will find it in finite time; if it doesn’t exist, we will search forever. The minimization operator is often introduced in the context of loops: $Δ^0_1$ formulae have bounded quantifiers, so they are clearly analogous to for loops, while the minimization operator is analogous to a while loop. But while loops are not guaranteed to halt (“while false, loop”), so how does attaching a minimization operator to a primitive recursive formula always produce a $Δ^0_1$ formula? If it doesn’t always, then the general recursive functions are not closed under minimization, which I thought was the point of the formalism, and we have general recursive functions which are $Σ^0_1$, contradicting both my understanding of Post’s theorem and the definition of general recursive.
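For concreteness, here is the while-loop reading of $μ$ sketched in Python (the names `mu` and `ceil_sqrt` are mine, not standard):

```python
def mu(predicate, x):
    """Least y with predicate(x, y): an unbounded search, i.e. a while loop.

    Halts iff some witness y exists; otherwise it loops forever,
    which is exactly the source of partiality.
    """
    y = 0
    while not predicate(x, y):
        y += 1
    return y

# A predicate with a witness for every x, so mu is total here:
def ceil_sqrt(x):
    return mu(lambda x, y: y * y >= x, x)

# A predicate with no witness anywhere, e.g.
# mu(lambda x, y: False, 0), would never return.
```

The point of the toy example is that totality of the resulting function depends on the predicate, not on the $μ$-operator itself.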

My only idea of how this could be resolved is that, by Kleene’s normal form theorem, only one use of minimization is ever needed, so we can always make the formula the minimization is attached to primitive recursive, and that somehow guarantees that the value being searched for exists, so the search won’t loop forever; but I don’t know why that would be true, if it is true. The other option is that I do not understand Post’s theorem, or “general recursive/computable”. I always thought “general recursive/computable” meant the total functions closed under the primitive recursive functions and minimization, not the (non total) partial functions. That is, I thought the general recursive functions were those whose graphs are recursively enumerable and have recursively enumerable complements, not those whose graphs are recursively enumerable but whose complements are not.

I know that I’m wrong about something, probably multiple things, but I cannot find an explicit answer to these questions in any textbook I have access to. What am I missing?

Best Answer

You're quite right that "the $\mu$-operator breaks $\Delta^0_1$-ness." As an explicit example, the formula $\theta(x,y)\equiv$ "the $x$th Turing machine with input $x$ halts within $y$ steps" is $\Delta^0_1$, but the unary relation $$H(x)\equiv \mbox{"}0\not=\mu y.\theta(x,y)\mbox{"}$$ is not $\Delta^0_1$.
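The gap between the $\Delta^0_1$ matrix $\theta$ and the $\Sigma^0_1$ set $H$ can be seen in a toy sketch, with a made-up table `HALT_STEP` standing in for actual machine simulation:

```python
# Toy stand-in for machine behavior: HALT_STEP[x] is the step at which the
# x-th machine halts on input x, or None if it never halts (invented data).
HALT_STEP = {0: 3, 1: None, 2: 7}

def theta(x, y):
    """Decidable (Delta^0_1): does machine x halt on input x within y steps?
    Every call returns in finite time, whatever the answer."""
    s = HALT_STEP[x]
    return s is not None and s <= y

def H(x):
    """Merely semi-decidable (Sigma^0_1): an unbounded mu-search over theta.
    Returns for halting machines; loops forever otherwise (e.g. H(1))."""
    y = 0
    while not theta(x, y):
        y += 1
    return y  # least stage by which machine x has halted on input x
```

Each call to `theta` is a bounded check, but `H` wraps it in an unbounded search, and that single step is where decidability is lost.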

However, this doesn't contradict Post's theorem. The mistake is the following:

I always thought “general recursive/computable” meant the total functions closed under the primitive recursive functions and minimization, not the (non total) partial functions.

That's not correct - (general) recursive functions are allowed to be partial. In particular, there is a recursive enumeration of the partial recursive functions, but there is no such enumeration of the total recursive functions.

Indeed, this is sort of the driving observation behind recursion theory. Remember that diagonalization tells us that we can never have an "effective" notion of "total effective function" - otherwise, the function $f$ given by $f(x) = 1+ t_x(x)$, where $t_x$ is the $x$th effective function according to this notion, would be simultaneously total effective and not total effective. Whenever we have a "concretely defined" class of total recursive functions, it falls short of being the whole class of total recursive functions.
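The diagonal argument in the total case can be run on any concretely given list of total functions (here a finite toy list; the three entries are arbitrary examples):

```python
# A toy stand-in for an effective enumeration of total functions.
total_fns = [lambda x: x, lambda x: x * x, lambda x: 42]

def diagonal(x):
    """f(x) = 1 + t_x(x): total, yet it differs from the x-th listed
    function at input x, so it appears nowhere on the list."""
    return 1 + total_fns[x](x)
```

Whatever list we start from, `diagonal` is total and escapes it, which is the formal content of "falls short of being the whole class."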

By contrast, once we allow partiality this issue goes away. We can whip up a recursive enumeration $(\varphi_e)_{e\in\mathbb{N}}$ of the (partial) recursive functions (essentially, this is a universal Turing machine), and the function $$\theta(x)=1+\varphi_x(x)$$ is indeed recursive, but there's no contradiction: we have $\theta=\varphi_e$ for some $e$ such that $\varphi_e(e)$ is undefined. Our diagonalizing function $\theta$ is also therefore undefined at $e$, and so there's no issue. In fact, if we spend enough time thinking about this situation (and get a bit clever), we'll eventually arrive at the recursion theorem.
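Once partiality is allowed, the same diagonal construction goes through harmlessly. A toy model (all names mine; `None` and non-terminating recursion both play the role of "undefined"):

```python
# A toy enumeration of partial functions; None means "undefined".
registry = [lambda x: x + 1, lambda x: None]

def phi(e, x):
    """The e-th partial function applied to x."""
    return registry[e](x)

def theta(x):
    """1 + phi_x(x), undefined wherever phi_x(x) is."""
    v = phi(x, x)
    return None if v is None else 1 + v

# theta is itself a partial function, so it belongs in the enumeration:
registry.append(theta)  # theta = phi_2 in this toy model

# No contradiction: evaluating theta(2) would call phi(2, 2) = theta(2)
# and recurse forever, i.e. theta is genuinely undefined at its own index.
```

The diagonal function still exists, but it dodges the contradiction by being undefined exactly where the total-function version was forced into paradox.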

Of course, ease of use isn't a fully satisfying motivator for a shift in the notion of "fundamental object." But recursion theory is supposed to model algorithms, and the simple fact is that there are algorithmic procedures which don't halt, and - while we can easily check that a purported algorithm is "grammatically correct" - there is no good way to determine if an algorithm is going to halt on a given input. So in fact partial functions correspond better to what we care about than total functions.


Since you mention graphs: a (unary, for simplicity) recursive function can be identified with its graph $G_f:=\{(x,y): f(x)=y\}$, and a relation $R\subseteq\mathbb{N}\times\mathbb{N}$ is the graph of some recursive function iff:

  • $R$ is r.e., and

  • for each $x$ there is at most one $y$ such that $R(x,y)$.

If the function is total then its graph is recursive, but otherwise it's merely r.e. This is a consequence of the second clause: it means we can compute $f(x)$ by searching an enumeration of $G_f$ for the unique $y$ such that $(x,y)\in G_f$, and when $f$ is total that search always halts, so we can decide $(x,y_0)\in G_f$ by computing $f(x)$ and comparing. By contrast, if we drop the second clause this breaks: a total r.e. relation need not be recursive.
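The search through the graph can be sketched directly (hypothetical names; `enumerate_graph` is any effective listing of the pairs in $G_f$):

```python
def eval_from_graph(enumerate_graph, x):
    """Compute f(x) by searching an r.e. listing of the graph G_f.

    By the second clause (at most one y per x), the first pair found with
    first coordinate x is the answer. The search halts exactly when x lies
    in the domain of f; totality of f makes it halt everywhere.
    """
    for a, b in enumerate_graph():
        if a == x:
            return b

def square_graph():
    """An effective enumeration of the graph of the total function n -> n*n."""
    n = 0
    while True:
        yield (n, n * n)
        n += 1
```

For a partial `f` the same code still works on the domain of `f`, but loops forever off it, matching the "merely r.e." case.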
