I think your question is not as precise as you suggest.
First, let me point out that you have not actually defined a function $z$, in the sense of giving a first-order definition of it in set theory, and you provably cannot do so, because of Tarski's theorem on the non-definability of truth. We simply have no way to express the relation "$x$ is definable" in the usual first-order language of set theory.
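To make the obstacle concrete, here is the semantic form of Tarski's theorem, where $\ulcorner\sigma\urcorner$ denotes the Gödel code of the sentence $\sigma$: for any model $\mathcal{M}$ of ZFC, there is no formula $\mathrm{Tr}(x)$ in the language of set theory such that
$$\mathcal{M}\models\sigma\quad\Longleftrightarrow\quad\mathcal{M}\models\mathrm{Tr}(\ulcorner\sigma\urcorner)\qquad\text{for every sentence }\sigma.$$
The natural attempt to express "$x$ is definable" as "there is a formula $\varphi$ such that $x$ is the unique object satisfying $\varphi$" would require exactly such a uniform satisfaction predicate.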
More specifically:
Theorem. If ZFC is consistent, then there are models
of ZFC in which the collection of definable natural numbers
is not a set or even a class.
Proof. If $V$ is a model of ZFC, then let $M$ be an internal
ultrapower of $V$ by a nonprincipal ultrafilter on
$\omega$. Thus, the natural numbers of $M$ are nonstandard
relative to $V$. The definable elements of $M$ are all contained within the range of the ultrapower map, whose intersection with the natural numbers of $M$ is bounded. Thus, $M$ cannot have this collection of
objects as a set or class, since it would reveal to $M$
that its natural numbers are ill-founded, contradicting
that $M$ satisfies ZFC. QED
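To spell out the key step, used again in the proofs below: let $j\colon V\to M$ be the ultrapower embedding. If $a$ is definable in $M$ as the unique solution of $\varphi(x)$, then by elementarity $V$ also has a unique solution $b$ of $\varphi(x)$, and
$$M\models\varphi(j(b)),\qquad\text{so}\qquad a=j(b)\in\operatorname{ran}(j).$$
Moreover, $\operatorname{ran}(j)$ meets the natural numbers of $M$ in a bounded set: writing $[\mathrm{id}]$ for the class of the identity function, we have $j(n)=[c_n]\lt[\mathrm{id}]$ for every $n\in\omega^V$, since each set $\{m : n\lt m\}$ is co-finite in $V$ and so belongs to the nonprincipal ultrafilter.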
In such a model, your definition of $z$ is not first-order.
It could make sense, however, to treat your function $z$ in a model as an externally defined function, defined outside the model (as in second-order logic). In this case, $z(n)$ involves only standard, meta-theoretic definitions, and other problems arise.
Theorem. If ZFC is consistent, then there is a model
of ZFC in which $z(n)$ is bounded by a constant function.
Proof. If ZFC is consistent, then so is $ZFC+\neg
Con(ZFC)$. Let $V$ be any model of this theory, so that
there are no models of ZFC there, and the second part of
the definition of $z$ becomes vacuous, so it reduces to its
definable-in-$V$ first part. Let $M$ be an internal
ultrapower of $V$ by a nonprincipal ultrafilter on $\omega$. Thus, $M$
is nonstandard relative to $V$. But the function $z$,
defined externally, uses only standard definitions, and the
definable elements of $M$ all lie in the range of the
ultrapower map. If $N$ is any $V$-nonstandard element of
$M$, then every definable element of $M$ is below $N$, and
so $z(n)\lt N$ for every meta-theoretic $n$. QED
Theorem. If ZFC is consistent, then there is a model
of ZFC in which $f(n)\lt z(10000)$ for every natural number $n$ in the meta-theory.
Proof. If ZFC is consistent, then so is $ZFC+\neg
Con(ZFC)+GCH$. Let $V$ be a countable model of $ZFC+\neg
Con(ZFC)+GCH$. Since $V$ has no models of ZFC, again the
second part of your definition is vacuous, and it reduces
just to the definability-in-$V$ part. Let $M$ again be an
internal ultrapower of $V$ by a nonprincipal ultrafilter on $\omega$,
and let $N$ be a $V$-nonstandard natural number of $M$.
Every definable element of $M$ is in the range of the
ultrapower map, and therefore below $N$. In particular, for
every meta-theoretic natural number $n$, we have $f(n)\lt
N$ in $M$, since $f(n)$ is definable. Now, let $M[G]$ be a
forcing extension in which the continuum has size
$\aleph_N^M$. Thus, $N$ is definable in $M[G]$ by a
relatively short formula; let's say 10000 symbols (but I
didn't count). Since the forcing does not affect the
existence of ZFC models or Turing computations between $M$
and $M[G]$, it follows that $f(n)\lt z(10000)$ in $M[G]$ for every natural number $n$ of $V$, and in particular for every meta-theoretic $n$. QED
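Presumably the short formula implicit in this argument is just the assertion that the continuum is the $N$-th aleph, something like
$$N \;=\; \text{the unique } n \text{ such that } 2^{\aleph_0}=\aleph_n,$$
which, written out in the primitive language of set theory, plausibly fits well within the generous 10000-symbol bound, so that $z(10000)\geq N$ in $M[G]$.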
Theorem. If ZFC is consistent, then there is a model
of ZFC with a natural number constant $c$ in which $z(n)\lt
f(c)$ for all meta-theoretic natural numbers $n$.
Proof. Use the model $M$ (or $M[G]$) as above. This time,
let $c$ be any $V$-nonstandard natural number of $M$. Since
the definable elements of $M$ all lie in the range of the
ultrapower map, it follows that every $z(n)$, for meta-theoretic $n$, is among the $V$-standard elements of $M$, all of which are less than $c$. But $M$
easily has $c\leq f(c)$, and so $z(n)\lt f(c)$ for all
these $n$. QED
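In symbols, the comparisons in this last proof chain together as follows: each $z(n)$ lies in $\operatorname{ran}(j)\cap\omega^{M}$, every element of that set is below the $V$-nonstandard number $c$, and $c\leq f(c)$, so
$$z(n)\;\lt\;c\;\leq\;f(c)\qquad\text{for all meta-theoretic }n.$$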
Two weeks ago a conference was held on precisely the topic
of your question, the Workshop on Set Theory and the
Philosophy of
Mathematics
at the University of Pennsylvania in Philadelphia. The
conference description was:
Hilbert, in his celebrated address to the Second International
Congress of Mathematicians held at Paris in 1900, expressed the
view that all mathematical problems are solvable by the application
of pure reason. At that time, he could not have anticipated the fate
that awaited the first two problems on his list of twenty-three,
namely, Cantor's Continuum Hypothesis and the problem of the consistency
of an axiom system adequate to develop real analysis. The Gödel
Incompleteness Theorems and the Gödel-Cohen demonstration of the
independence of CH from ZFC make clear that continued confidence in
the unrestricted scope of pure reason in application to mathematics
cannot be founded on trust in its power to squeeze the utmost from
settled axiomatic theories which are constitutive of their respective
domains. The goal of our Workshop is to consider the extent to which it
may be possible to frame new axioms for set theory that
both settle the Continuum Hypothesis and satisfy reasonable
standards of justification. The recent success of set
theorists in establishing deep connections between large
cardinal hypotheses and hypotheses of definable determinacy
suggests that it is possible to find rational justification
for new axioms that far outstrip the evident truths about
the cumulative hierarchy of sets, first codified by Zermelo
and later supplemented and refined by others, in their
power to settle questions about real analysis. The Workshop
will focus on both the exploration of promising
mathematical developments and on philosophical analysis of
the nature of rational justification in the context of set
theory.
Speakers included Hugh Woodin, Justin Moore, John Burgess,
Aki Kanamori, Tony Martin, Juliette Kennedy, Harvey
Friedman, Andreas Blass, Peter Koellner, John Steel, James
Cummings, Kai Hauser and myself. Bob Solovay also attended.
Several speakers have made their slides available on the
conference page, and I believe that they are organizing a
conference proceedings volume.
Without going into any details, let me say merely that in
my own talk (slides
here) I argued
against the position that there should be a unique theory
as in your question, by outlining the case for a multiverse
view in set theory, the view that we have multiple distinct
concepts of set, each giving rise to its own set-theoretic
universe. Thus, the concept of set has shattered into
myriad distinct set concepts, much as the ancient concepts
of geometry shattered with the discovery of non-Euclidean
geometry and the rise of a modern geometrical perspective.
On the multiverse view, the CH question is a settled
question---we understand in a very deep way that the CH and
$\neg$CH are both dense in the multiverse, in the sense
that we can easily obtain either one in a forcing extension
of any given universe, while also controlling other
set-theoretic phenomena. I also gave an argument for why
the traditionally proposed template for settling CH---where
one finds a new natural axiom that implies CH or that
implies $\neg$CH---is impossible.
Meanwhile, other speakers gave arguments closer to the
position that you seem to favor in your question. In
particular, Woodin described his vision for the Ultimate L,
and you can see his slides.
In the field of computability theory, the terms "decidable set", "computable set", and "recursive set" are all formally defined, and they all mean the same thing. So, to put it gently, Wells is misusing terminology.
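For reference, here is the shared formal definition: a set $A\subseteq\mathbb{N}$ is decidable (equivalently, computable, or recursive) if and only if its characteristic function
$$\chi_A(n)=\begin{cases}1 & \text{if }n\in A,\\ 0 & \text{if }n\notin A\end{cases}$$
is a total computable function, that is, if some Turing machine halts on every input $n$ and correctly reports whether $n\in A$.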
There is an enormous amount of evidence that the formally-defined class of computable functions on the natural numbers is exactly the same as the informally-defined class of effectively calculable functions: the ones for which there is an effective procedure that could, in principle, be carried out by a human with unlimited time, resources, and patience. That is, there is an enormous amount of evidence that the Church-Turing thesis is true in the form "a function is effectively calculable (in the informal sense) if and only if it is computable (in the formal sense)."
Regarding Wells' invocation of Tarski, we'll never really know what Tarski had in mind, because Tarski died within a year of the conversation Wells describes. But Wells' argument that an entire field should redefine the formal term "decidable" based on an off-the-cuff discussion with Tarski is not compelling.
There is a research area known as "hypercomputation" that studies various models of computation that go beyond things computable by Turing machines. The reason that this work is not viewed as evidence against the Church-Turing thesis (as stated above) is that these models don't possess the essential property of being able to be carried out by a patient human using unlimited paper, pencil, and time.
That kind of effectiveness is the heart of Turing computability, and the reason that we use the word "computable" to refer to the Turing computable functions rather than some broader or narrower class of functions.