The easiest way to show that the Ackermann function is computable is to show that its value can always be obtained in a finite number of computation steps.
Consider $\mathbb N^2$ with the usual lexicographic order:
$(a,b)<(c,d)$ iff $a<c$, or $a=c$ and $b<d$.
This is a well-order. Hence all decreasing chains are finite.
Now, to compute $A(x+1,y+1)$ you need to compute $A(x+1,y)$ and then $A(x,z)$ with $z=A(x+1,y)$. But $(x+1,y+1)>(x+1,y)$ and $(x+1,y+1)>(x,z)$, so you can repeat these two steps only a finite number of times (by the well-order property).
The same holds for $A(x+1,0)$, because $(x+1,0)>(x,1)$. (Note that you could replace the $1$ in $(x,1)$ by anything and it would remain true.)
So the computation must end after finitely many steps and yields a result.
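The decreasing-measure argument can be sketched directly in Python (a minimal illustration; the function name is mine). Each recursive call strictly decreases $(m,n)$ in the lexicographic order, which is exactly why the recursion bottoms out:

```python
def ackermann(m, n):
    """Ackermann function. Every recursive call is on a pair strictly
    smaller than (m, n) in the lexicographic order on N^2, so the
    recursion terminates (the well-order admits no infinite descent)."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)                    # (m, 0) > (m-1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))      # both calls are smaller
```

For example, `ackermann(2, 2)` returns 7, matching the hand computation below. (Beware that even `ackermann(4, 1)` already blows past Python's default recursion limit.)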
$$\begin{align}A(2,2)&=A(1,A(2,1))\\
&=A(1,A(1,A(2,0)))\\
&=A(1,A(1,A(1,1)))\\
&=A(1,A(1,A(0,A(1,0))))\\
&=A(1,A(1,A(0,A(0,1))))\\
&=A(1,A(1,A(0,2)))\\
&=A(1,A(1,3))\\
&=A(1,A(0,A(1,2)))\\
&=A(1,A(0,A(0,A(1,1))))\\
&=A(1,A(0,A(0,A(0,A(1,0)))))\\
&=A(1,A(0,A(0,A(0,A(0,1)))))\\
&=A(1,A(0,A(0,A(0,2))))\\
&=A(1,A(0,A(0,3)))\\
&=A(1,A(0,4))\\
&=A(1,5)\\
&=A(0,A(1,4))\\
&=A(0,A(0,A(1,3)))\\
&=A(0,A(0,A(0,A(1,2))))\\
&=A(0,A(0,A(0,A(0,A(1,1)))))\\
&=A(0,A(0,A(0,A(0,A(0,A(1,0))))))\\
&=A(0,A(0,A(0,A(0,A(0,A(0,1))))))\\
&=A(0,A(0,A(0,A(0,A(0,2)))))\\
&=A(0,A(0,A(0,A(0,3))))\\
&=A(0,A(0,A(0,4)))\\
&=A(0,A(0,5))\\
&=A(0,6)\\
&=7\end{align}$$
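To see concretely that the expansion above terminates, here is a sketch (the helper name `count_expansion_steps` is mine) that counts how many recursive calls evaluating $A(m,n)$ makes. The well-ordering argument guarantees the count is finite, though it grows very fast with $m$:

```python
def count_expansion_steps(m, n):
    """Evaluate A(m, n) and count the recursive calls made along the way.
    Finiteness of this count is what the well-order argument proves."""
    calls = 0

    def A(m, n):
        nonlocal calls
        calls += 1
        if m == 0:
            return n + 1
        if n == 0:
            return A(m - 1, 1)
        return A(m - 1, A(m, n - 1))

    return A(m, n), calls
```

For instance, `count_expansion_steps(2, 2)` returns `(7, 27)`: twenty-seven calls suffice for $A(2,2)$.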
Note that this is a general method: a total function is recursive iff you can find a well-order and sequences of decreasing steps that lead to the result. (Several such sequences form a tree, as with Ackermann, but by König's lemma a finitely-branching tree with only finite paths is finite.)
A function is primitive recursive iff the order needed to prove the induction is at most $\omega$.
Best Answer
You seem to be conflating primitive recursive functions with total recursive functions. In fact, the primitive recursive functions are a strict subset of the total recursive functions.
More precisely (though I'll refer you to Wikipedia or some other reference for a complete definition), a primitive recursive function can be computed by a sequence of elementary computations whose running time can be bounded at every step. The only form of recursion allowed in the definition is primitive recursion, in which a function calls itself on a smaller argument. Every primitive recursion can be reduced to giving definitions for $f(0)$ (not involving $f$) and for $f(n+1)$ in terms of $n$ and $f(n)$ only.
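That reduction can be sketched in Python (the names `primitive_rec` and `factorial` are mine, for illustration): primitive recursion is exactly a for loop whose bound is known when the loop starts.

```python
def primitive_rec(base, step):
    """Build f from f(0) = base and f(n+1) = step(n, f(n)).
    This is a bounded for loop: the iteration count n is fixed on entry."""
    def f(n):
        acc = base
        for i in range(n):       # loop bound known before the loop starts
            acc = step(i, acc)
        return acc
    return f

# Example: factorial via f(0) = 1, f(n+1) = (n+1) * f(n)
factorial = primitive_rec(1, lambda n, acc: (n + 1) * acc)
```

Here `factorial(5)` returns 120, computed with no recursion at all, only a loop of exactly 5 iterations.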
General recursive definitions allow one extra operation: searching for the smallest $n$ that makes some property hold. In general, it's impossible to know in advance how far you'll have to go to find such an $n$.
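This extra operation is the $\mu$-operator (minimization); a hedged one-liner sketch in Python (the name `mu` is mine):

```python
def mu(predicate):
    """Unbounded search: return the smallest n with predicate(n) true.
    The while loop has no a-priori bound and need not terminate at all
    if no such n exists -- the hallmark of general (mu-)recursion."""
    n = 0
    while not predicate(n):
        n += 1
    return n
```

For example, `mu(lambda n: n * n >= 50)` returns 8, since $7^2 = 49 < 50 \le 64 = 8^2$; but nothing in the loop itself bounds how long the search takes.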
The definition of the Ackermann function contains the clause $A(m+1,n+1) = A(m, A(m+1, n))$. The “next” value of the function (going from $n$ to $n+1$) depends on calls with larger arguments (starting with $A(m+1,n)$). So the definition is not in primitive recursive form. It takes some work, which I won't reproduce here, to show that no alternative presentation of the Ackermann function uses only primitive recursion. Yet the function is well-defined: once $A(m,\cdot)$ is fully defined, the values $A(m+1,n)$ are determined by induction on $n$.
If you prefer a programming approach, recursive functions can be computed with dynamic memory allocation and while loops. Primitive recursive functions can be computed with only for loops, where the loop bound may be computed at run time but must be known when the loop starts.
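As a hedged illustration of that last point (the name `ackermann_iter` is mine), here is the Ackermann function computed with dynamic memory (an explicit, unboundedly growing stack of pending first arguments) and a single while loop whose iteration count cannot be bounded when the loop starts:

```python
def ackermann_iter(m, n):
    """Ackermann via an explicit stack and a while loop: no a-priori
    bound on iterations, and the stack can grow without a fixed limit.
    Invariant: the result is A applied right-to-left over the stack."""
    stack = [m]
    while stack:                 # unbounded loop: no limit known on entry
        m = stack.pop()
        if m == 0:               # A(0, n) = n + 1
            n += 1
        elif n == 0:             # A(m, 0) = A(m-1, 1)
            stack.append(m - 1)
            n = 1
        else:                    # A(m, n) = A(m-1, A(m, n-1))
            stack.append(m - 1)
            stack.append(m)
            n -= 1
    return n
```

A for loop with a bound fixed on entry could never express this computation directly, which is the programming-level face of Ackermann not being primitive recursive.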