[Math] Ackermann’s function is $\mu$-recursive

ackermann-function, computability, discrete-mathematics, recursion

In my book there is the following proof that Ackermann's function is $\mu$-recursive:

We propose to show that Ackermann's function is $\mu$-recursive. The first part of the job is to devise a scheme whereby each step of an evaluation of Ackermann's function can be represented by a single natural number. We will then show that the numbers associated with successive steps in an evaluation can be obtained from a primitive recursive function and that the result of the evaluation can be extracted by a suitably defined $\mu$-recursive function. We begin with the problem of representing evaluations numerically.

Ackermann's function is defined by three rules:

Rule 1: $A(0, y)=y+1$

Rule 2: $A(x+1, 0)=A(x, 1)$

Rule 3: $A(x+1, y+1)=A(x, A(x+1, y))$
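The three rules can be transcribed directly into a recursive procedure. Here is a minimal sketch in Python (the function name `A` and the use of naive recursion are my own choices); it is hopelessly slow for all but tiny arguments:

```python
def A(x, y):
    """Ackermann's function, transcribing Rules 1-3 directly."""
    if x == 0:                      # Rule 1: A(0, y) = y + 1
        return y + 1
    if y == 0:                      # Rule 2: A(x+1, 0) = A(x, 1)
        return A(x - 1, 1)
    return A(x - 1, A(x, y - 1))    # Rule 3: A(x+1, y+1) = A(x, A(x+1, y))

print(A(2, 2))  # 7
```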

Three facts are readily apparent from a study of Rules $1$ to $3$:

First, each step in the evaluation of $A(x, y)$ results either in a single natural number or else in a nested expression of the form $$A(w_1, A(w_2, A(w_3, \dots A(w_{k-1}, w_k) \dots )))$$

Second, exactly one of the three rules is applicable at any given step.

And third, no matter which rule applies, it must always be applied to the rightmost $A$ in the given expression; i.e., it must always be applied at the innermost level of the nest.

Perhaps less obvious than these observations is the fact that the evaluation procedure just outlined must always terminate after a finite number of steps.

Lemma 1. For each choice of natural numbers $x$ and $y$, the expression $A(x, y)$ can be reduced to a single natural number by means of a finite number of applications of Rules $1$, $2$, and $3$.

Let us now devise a method of representing the steps of an evaluation numerically. Remember that each step consists of an expression of the form $$A(w_1, A(w_2, \dots , A(w_{k-1}, w_k) \dots ))$$

Since the terms in such an expression are always associated to the right, most of the punctuation can be omitted without loss of information. In particular, the expression $A(w_1, A(w_2, \dots , A(w_{k-1}, w_k) \dots ))$ can be unambiguously represented by the $k$-tuple $$(w_1, \dots , w_{k-1}, w_k)$$

Thus each step in the evaluation of Ackermann's function can be described by a tuple of natural numbers.
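To make the tuple representation concrete, here is a sketch (the names `step` and `evaluate` are mine) that applies the rules to the last two components of the tuple, exactly as the observations above describe:

```python
def step(t):
    """Apply the one applicable rule at the innermost level of the nest."""
    if len(t) == 1:
        return t                              # a single number: done
    *rest, a, b = t                           # innermost call is A(a, b)
    if a == 0:                                # Rule 1
        return tuple(rest) + (b + 1,)
    if b == 0:                                # Rule 2
        return tuple(rest) + (a - 1, 1)
    return tuple(rest) + (a - 1, a, b - 1)    # Rule 3

def evaluate(x, y):
    t = (x, y)
    while len(t) > 1:
        t = step(t)
    return t[0]

print(evaluate(2, 2))  # 7
```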

We next use a Gödel-numbering scheme to reduce the description of each step in an evaluation to a single natural number. In particular, we choose to represent the tuple $(w_1, \dots , w_k)$ by the natural number $$2^k 3^{w_1} \cdots p_k^{w_k}$$

Note that the length of the tuple is encoded as the exponent of two, while the components of the tuple are encoded as the exponents of the succeeding primes. This provides a simple way of telling how many components to "look for" in a given Gödel number.
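As a sketch of this encoding (the helper names `godel` and `exponent` are mine, and only a fixed table of small primes is used):

```python
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]   # pn(0), pn(1), ...

def godel(t):
    """Goedel number 2^k * 3^{w1} * ... * pn(k)^{wk} of the tuple t."""
    z = 2 ** len(t)
    for i, w in enumerate(t, start=1):
        z *= PRIMES[i] ** w
    return z

def exponent(i, z):
    """E(i, z): the exponent of pn(i) in the decomposition of z."""
    e = 0
    while z % PRIMES[i] == 0:
        z //= PRIMES[i]
        e += 1
    return e

z = godel((1, 2))          # 2^2 * 3^1 * 5^2 = 300
print(z, exponent(0, z))   # 300 2 -- the exponent of 2 recovers the length
```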

This scheme allows us to describe the evaluation of Ackermann's function with a sequence of natural numbers. Such a numbering scheme is referred to as an arithmetization of the evaluation of Ackermann's function.

Note that the steps of the evaluation are numbered, starting with zero, and the zeroth step corresponds to the expression $A(x, y)$ whose value is to be determined.

We now define the three-variable function $\psi$ so that $\psi(x, y, n)$ is the Gödel number of the $n$th step in the evaluation of $A(x, y)$. In order to make $\psi$ a total function, we agree to "extend" each evaluation of Ackermann's function as follows. If the actual evaluation of $A(x, y)$ terminates at step $n_0$ and this step is assigned the Gödel number $z$, we simply set $\psi(x,y,n)=z$ for all $n \geq n_0$. The resulting function $\psi$ will be referred to as the trace function for the evaluation of $A$.

Our next job is to show that $\psi$ is a primitive recursive function. For this purpose it is convenient to be able to view every natural number $z$ as being a Gödel number of some tuple. The simplest approach is to ignore any "extra" primes in the decomposition of $z$. Thus the natural number $$2^k3^{w_1}5^{w_2} \cdots p_k^{w_k}p_{k+1}^{w_{k+1}} \cdots p_m^{w_m}$$ will be treated as if it were the Gödel number of the $k$-tuple $(w_1, \dots , w_k)$.

With this convention in mind, we now assign an auxiliary predicate and an auxiliary function to each of the rules defining Ackermann's function. For each of the values $i=1$, $i=2$, and $i=3$, let $Q_i$ denote the predicate such that:

$$Q_i(z) \Leftrightarrow z \text{ is a Gödel number of a tuple to which Rule } i \text{ applies}$$

And let $h_i$ denote a function such that, if $Q_i(z)$ holds, then:

$$h_i(z) \text{ is the Gödel number of the tuple that results when Rule } i \text{ is applied to the tuple represented by the number } z$$

If $Q_i(z)$ does not hold (i.e., if $z$ does not represent a tuple to which Rule $i$ applies), the value assumed by $h_i(z)$ is unimportant. (We will in fact choose these "default values" so as to ensure the primitive recursiveness of $h_i$.)

The predicates $Q_1$, $Q_2$, $Q_3$ and the functions $h_1$, $h_2$, $h_3$ provide an easy way of specifying the trace function $\psi$. Since $\psi(x, y, 0)$ is just the Gödel number of the tuple $(x, y)$, we have: $$\psi(x, y, 0)=2^23^x5^y$$

And since $\psi(x, y, n+1)$ can be obtained from $\psi(x, y, n)$ by determining which rule applies to the tuple represented by $\psi(x, y, n)$ and then computing the effect of applying that rule, we can write:

$$\psi(x, y, n+1)=\begin{cases} h_1(\psi(x, y, n)) & \text{ if } Q_1(\psi(x, y, n)) \\ h_2(\psi(x, y, n)) & \text{ if } Q_2(\psi(x, y, n)) \\ h_3(\psi(x, y, n)) & \text{ if } Q_3(\psi(x, y, n)) \\ \psi(x, y, n) & \text{ otherwise} \end{cases}$$

Thus to show that $\psi$ is primitive recursive, it is only necessary to show that the predicates $Q_1, Q_2, Q_3$ and the functions $h_1, h_2, h_3$ are primitive recursive.

To this end, we introduce one more auxiliary function $L$, whose value for argument $z$ is the length of the tuple represented by $z$. Since $$L(z)=E(0, z)$$ where $E(i, z)$ denotes the exponent of the prime $pn(i)$ (with $pn(0)=2$) in the decomposition of $z$, the function $L$ is certainly primitive recursive.

Now, the predicate $Q_1(z)$ is to hold iff $z$ represents a tuple containing at least two components, of which the next-to-last is zero. Thus we can write $$Q_1(z) \Leftrightarrow (L(z)>1) \wedge (E(L(z)\overset{\cdot }{-}1, z)=0)$$

Similar reasoning yields the following definitions of $Q_2$ and $Q_3$, as the reader may verify.

$$Q_2(z) \Leftrightarrow (L(z)>1) \wedge (E(L(z)\overset{\cdot}{-}1, z)\neq 0) \wedge (E(L(z), z)=0) \\ Q_3(z) \Leftrightarrow (L(z)>1) \wedge (E(L(z)\overset{\cdot}{-}1, z)\neq 0) \wedge (E(L(z), z) \neq 0)$$

Thus the predicates $Q_1$, $Q_2$, and $Q_3$ are all primitive recursive.
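These predicates can be checked mechanically. A sketch, with $L$ and $E$ transcribed as above (the Python names are mine, and only a fixed prime table is used):

```python
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]   # pn(0), pn(1), ...

def E(i, z):
    """Exponent of pn(i) in the decomposition of z."""
    e = 0
    while z % PRIMES[i] == 0:
        z //= PRIMES[i]
        e += 1
    return e

def L(z):
    return E(0, z)          # length of the tuple represented by z

def Q1(z): return L(z) > 1 and E(L(z) - 1, z) == 0
def Q2(z): return L(z) > 1 and E(L(z) - 1, z) != 0 and E(L(z), z) == 0
def Q3(z): return L(z) > 1 and E(L(z) - 1, z) != 0 and E(L(z), z) != 0

# The tuple (2, 2) has Goedel number 2^2 * 3^2 * 5^2 = 900; Rule 3 applies.
print(Q1(900), Q2(900), Q3(900))   # False False True
```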

The functions $h_1$, $h_2$, and $h_3$ are only slightly more difficult to deal with. First consider the case in which $z$ represents a tuple to which Rule $1$ applies. That is, suppose that the prime decomposition of $z$ has the form: $$z=2^k3^{w_1} \cdots p_{k-2}^{w_{k-2}}p_{k-1}^0p_k^{w_k} \cdots p_m^{w_m}$$

Inspection of Rule $1$ reveals that in this case the appropriate value of $h_1(z)$ is: $$2^{k-1}3^{w_1} \cdots p_{k-2}^{w_{k-2}}p_{k-1}^{w_k+1}$$

We are therefore led to define $h_1$ as follows:

$$h_1(z)=2^{L(z)\overset{\cdot}{-}1} \left [ \prod_{i=1}^{L(z)\overset{\cdot}{-}2}pn(i)^{E(i, z)}\right ] pn(L(z)\overset{\cdot}{-}1)^{E(L(z), z)+1}$$

The choice of this definition ensures that $h_1$ is primitive recursive and that $h_1(z)$ assumes the required values whenever $Q_1(z)$ holds. (Remember that it does not matter what value $h_1(z)$ assumes when $Q_1(z)$ does not hold.)
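Transcribing the displayed definition of $h_1$ literally gives the following sketch (reusing the fixed prime table and the exponent-extraction helper; all names are mine):

```python
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]   # pn(0), pn(1), ...

def E(i, z):
    """Exponent of pn(i) in the decomposition of z."""
    e = 0
    while z % PRIMES[i] == 0:
        z //= PRIMES[i]
        e += 1
    return e

def h1(z):
    """Literal transcription of the displayed formula for h_1."""
    k = E(0, z)                          # k = L(z)
    out = 2 ** (k - 1)                   # new tuple is one component shorter
    for i in range(1, k - 1):            # i = 1, ..., L(z) - 2
        out *= PRIMES[i] ** E(i, z)
    return out * PRIMES[k - 1] ** (E(k, z) + 1)

# (0, 5) encodes A(0, 5); Rule 1 reduces it to the 1-tuple (6):
z = 2**2 * 3**0 * 5**5
print(h1(z) == 2**1 * 3**6)   # True
```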

Similar observations concerning Rules $2$ and $3$ lead to the following definitions of the functions $h_2$ and $h_3$.

$$h_2(z)= 2^{L(z)}\left [ \prod_{i=1}^{L(z)\overset{\cdot}{-}2}pn(i)^{E(i, z)}\right ] pn(L(z)\overset{\cdot}{-}1)^{E(L(z)\overset{\cdot}{-}1, z)\overset{\cdot }{-}1}pn(L(z)) \\ h_3(z)=\dots $$

Thus the desired functions $h_1$, $h_2$, and $h_3$, like the predicates $Q_1$, $Q_2$, and $Q_3$, are primitive recursive. And this in turn implies that the trace function $\psi$ is primitive recursive.

We are now ready to show that Ackermann's function is $\mu$-recursive. For this purpose we define the two-variable function $\eta$ so that $\eta(x, y)$ is the number of steps needed to evaluate $A(x, y)$. Evidently $\eta(x, y)$ is the smallest value of $n$ for which $\psi(x, y, n)$ represents a single natural number. Thus: $$\eta (x, y) =\mu n[L(\psi(x, y, n))=1]$$ Since $L$ and $\psi$ are primitive recursive, it follows that $\eta$ is a $\mu$-recursive function. Next note that the outcome of the evaluation of $A(x, y)$ is just the value of the natural number represented by $\psi(x, y, \eta(x, y))$. We may therefore write $$A(x, y)=E(1, \psi(x, y, \eta (x, y)))$$ Since $E$ and $\psi$ are primitive recursive and $\eta$ is $\mu$-recursive, we have finally established:

Theorem:

Ackermann's function is $\mu$-recursive.
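Putting the pieces together, here is a runnable sketch of the whole construction (for readability, $\psi$ steps on tuples and re-encodes rather than using the arithmetic $h_i$ directly; all names are mine, and the fixed prime table limits it to small arguments):

```python
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

def godel(t):
    """Goedel number 2^k * pn(1)^{w1} * ... * pn(k)^{wk} of the tuple t."""
    z = 2 ** len(t)
    for i, w in enumerate(t, start=1):
        z *= PRIMES[i] ** w
    return z

def E(i, z):
    """Exponent of pn(i) in the decomposition of z."""
    e = 0
    while z % PRIMES[i] == 0:
        z //= PRIMES[i]
        e += 1
    return e

def step(t):
    if len(t) == 1:
        return t                              # evaluation "extended" forever
    *rest, a, b = t
    if a == 0:
        return tuple(rest) + (b + 1,)         # Rule 1
    if b == 0:
        return tuple(rest) + (a - 1, 1)       # Rule 2
    return tuple(rest) + (a - 1, a, b - 1)    # Rule 3

def psi(x, y, n):
    """Goedel number of step n in the evaluation of A(x, y)."""
    t = (x, y)
    for _ in range(n):
        t = step(t)
    return godel(t)

def eta(x, y):
    """mu n [ L(psi(x, y, n)) = 1 ]: unbounded search for the final step."""
    n = 0
    while E(0, psi(x, y, n)) != 1:
        n += 1
    return n

def A(x, y):
    return E(1, psi(x, y, eta(x, y)))   # A(x,y) = E(1, psi(x, y, eta(x, y)))

print(A(2, 2))  # 7
```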


Could you explain the idea of the proof? I haven't understood it. Why do we need the functions $h_i$, $\psi$, and so on to show that Ackermann's function is $\mu$-recursive?

Best Answer

The easiest way to show that Ackermann's function is computable is to show that only a finite number of computation steps are needed to reach the result.

Consider $\mathbb N^2$ with the usual lexicographic order:

$(a,b)<(c,d)$ iff $a<c$ or ($a=c$ and $b<d$)

This is a well-order. Hence all decreasing chains are finite.

Now, to compute $A(x+1,y+1)$ you need to compute $A(x+1,y)$ and $A(x,z)$ with $z=A(x+1,y)$. But $(x+1,y+1)>(x+1,y)$ and $(x+1,y+1)>(x,z)$, so you can only repeat these two steps a finite number of times (by the property of the well-order).

The same holds for $A(x+1,0)$, because $(x+1,0)>(x,1)$. (Note that you could replace the $1$ in $(x,1)$ by anything; the inequality would still hold.)

So the computation must end in a finite number of steps and leads to a result.
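The decreasing-pair claims above are easy to check mechanically; a small sketch (the helper `smaller` is mine):

```python
def smaller(p, q):
    """Lexicographic order on N^2: is p < q?"""
    return p[0] < q[0] or (p[0] == q[0] and p[1] < q[1])

x, y = 3, 4
# Rule 2: (x+1, 0) -> (x, 1)
print(smaller((x, 1), (x + 1, 0)))            # True
# Rule 3: (x+1, y+1) -> (x+1, y) and (x, z) for any z
print(smaller((x + 1, y), (x + 1, y + 1)))    # True
print(smaller((x, 10**6), (x + 1, y + 1)))    # True: second component irrelevant
```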

$$\begin{align}A(2,2)&=A(1,A(2,1))\\ &=A(1,A(1,A(2,0)))\\ &=A(1,A(1,A(1,1)))\\ &=A(1,A(1,A(0,A(1,0))))\\ &=A(1,A(1,A(0,A(0,1))))\\ &=A(1,A(1,A(0,2)))\\ &=A(1,A(1,3))\\ &=A(1,A(0,A(1,2)))\\ &=A(1,A(0,A(0,A(1,1))))\\ &=A(1,A(0,A(0,A(0,A(1,0)))))\\ &=A(1,A(0,A(0,A(0,A(0,1)))))\\ &=A(1,A(0,A(0,A(0,2))))\\ &=A(1,A(0,A(0,3)))\\ &=A(1,A(0,4))\\ &=A(1,5)\\ &=A(0,A(1,4))\\ &=A(0,A(0,A(1,3)))\\ &=A(0,A(0,A(0,A(1,2))))\\ &=A(0,A(0,A(0,A(0,A(1,1)))))\\ &=A(0,A(0,A(0,A(0,A(0,A(1,0))))))\\ &=A(0,A(0,A(0,A(0,A(0,A(0,1))))))\\ &=A(0,A(0,A(0,A(0,A(0,2)))))\\ &=A(0,A(0,A(0,A(0,3))))\\ &=A(0,A(0,A(0,4)))\\ &=A(0,A(0,5))\\ &=A(0,6)\\ &=7\end{align}$$

Note that this is a general method: a total function is recursive iff you can find a well-order and sequences of decreasing steps that lead to the result. (Several such sequences form a tree, as with Ackermann's function, but a finitely branching tree with only finite paths is itself finite, by König's lemma.)

A function is primitive recursive iff the order needed to prove the induction is at most $\omega$.
