Consider the Peano axioms. There exists a model for them (namely, the natural numbers with an ordering relation $<$, binary function $+$, and constant term $0$). Therefore, by the model existence theorem, shouldn't this suffice to prove the consistency of first-order arithmetic? Why is Gentzen's proof necessary?
Why is there a need for ordinal analysis?
Tags: formal-proof, lo.logic, model-theory, ordinal-analysis, peano-arithmetic
Related Solutions
This is a refinement of the answer provided by Andres Caicedo.
For weak arithmetics, such as Robinson's $Q$, it is not bi-interpretability, but rather the weaker notion of mutual interpretability that turns out to be the "right" notion to study [See here for a thorough exposition by Harvey Friedman of various notions of interpretability].
It is known that $Q$ is mutually interpretable with a surprisingly weak set theory known as Adjunctive Set Theory, denoted $AS$, whose only axioms are the following two:
1. Empty Set: $\exists x\forall y\lnot (y\in x)$
2. Adjunction: $\forall x\forall y\exists z\forall t(t\in z\leftrightarrow (t\in x\vee t=y))$
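To get a feel for how much Adjunction already gives (a standard illustration, not the specific interpretation used in the results cited below): one can build numerals by
$$0:=\varnothing,\qquad n+1:=n\cup\{n\},$$
since for any sets $x$ and $y$, Adjunction yields a $z$ with $z=x\cup\{y\}$, and taking $y=x$ gives $x\cup\{x\}$. Note that without Extensionality such a $z$ need not be unique.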
The mutual interpretability of $Q$ and $AS$ is a refinement of a joint result of Szmielew and Tarski, who proved that $Q$ is interpretable in $AS$ plus Extensionality. This result was reported without proof in the classic 1953 monograph Undecidable Theories of Tarski, Mostowski, and Robinson. A proof was published by Collins and Halpern in 1970. Later work in this area was done by Montagna and Mancini in 1994, and most recently by Albert Visser in 2009, whose paper below I recommend for references and a short history:
A. Visser, Cardinal arithmetic in the style of Baron von Münchhausen, Rev. Symb. Log. 2 (2009), no. 3, 570–589
You can find a preprint of the paper here.
Note that since $Q$ is known to be essentially undecidable [i.e., every consistent extension of $Q$ is undecidable], the interpretability of $Q$ in $AS$ implies that $AS$ is essentially undecidable.
In principle, the answer can depend on the proof system, but as long as you stick to some of the usual Hilbert-style or sequent proof systems, this shouldn’t matter.
First, as explained in https://mathoverflow.net/questions/120106, the question is equivalent to the provability of the consistency of FPA in $I\Delta_0+\Omega_1$. (The fact that you are using second-order objects to encode proofs and formulas corresponds to using all numbers instead of just the logarithmically small ones in $I\Delta_0+\Omega_1$, hence you end up with the usual consistency statement.)
Now, working in $I\Delta_0+\Omega_1$ (or equivalently, Buss’s $S_2$), the consistency of FPA is equivalent to the consistency of the second-order theory of the model with a one-element first-order universe (in whatever finite language, it’s all equivalent), since the two theories are interpretable in each other. This in turn can be reduced to the quantified propositional calculus: since there is only one first-order element (and only one $n$-tuple of elements for every $n$), you can ignore first-order quantifiers and variables, and replace second-order variables with propositional variables both in second-order quantifiers and in atomic formulas. (Purely first-order atomic formulas such as $t=s$ can be replaced with the constant $\top$ for truth.) Thus, the question becomes whether $I\Delta_0+\Omega_1$ proves the consistency of the quantified propositional calculus ($G$).
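As a toy illustration of this translation (my own example, just to make the recipe concrete): over a one-element universe, a unary second-order variable carries exactly one bit of information, so for instance
$$\forall X\,\exists Y\,\forall x\,\bigl(Y(x)\leftrightarrow\lnot X(x)\bigr)\quad\rightsquigarrow\quad\forall p\,\exists q\,\bigl(q\leftrightarrow\lnot p\bigr),$$
where the first-order quantifier $\forall x$ is dropped, the atoms $X(x)$ and $Y(x)$ become propositional variables $p$ and $q$, and a purely first-order atom such as $x=x$ would become $\top$.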
The answer is that this is one of the major open problems in the area, but it is conjectured to be false. There is a kind of correspondence of subsystems of bounded arithmetic to propositional proof systems; in particular, the fragments $T^i_2$ of $S_2$ (${}=I\Delta_0+\Omega_1$) correspond to the fragments $G_i$ of the quantified propositional calculus, obtained by restricting all formulas in the proof (or alternatively, all cut formulas in the sequent calculus formulation) to $\Sigma^q_i$ or $\Pi^q_i$ formulas (= formulas in prenex form with at most $i$ quantifier blocks). This means:
- $T^i_2$ proves the consistency (and even some form of reflection principle) of $G_i$.
- Conversely, $\mathrm{Con}_{G_i}$ implies, over a weak base theory, all $\forall\Delta^b_1$-consequences of $T^i_2$ (and more complex consequences of $T^i_2$ can be axiomatized by an appropriate reflection principle). A related fact is that if $T^i_2$ proves a $\forall\Sigma^b_i$ statement, one can translate it into a sequence of quantified propositional tautologies which have polynomially bounded proofs in $G_i$.
- If $P$ is any propositional proof system whose consistency is provable in $T^i_2$, then $G_i$ polynomially simulates $P$.
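For orientation regarding the $\Sigma^q_i$/$\Pi^q_i$ restriction above (my own toy example, using the usual convention that $\Sigma$ means the outermost block is existential): a $\Sigma^q_2$ formula has the form
$$\exists p_1\cdots\exists p_k\,\forall q_1\cdots\forall q_m\,\varphi(\bar p,\bar q)$$
with $\varphi$ quantifier-free, i.e., a prenex quantified propositional formula with at most two quantifier blocks; $\Pi^q_2$ formulas start with a universal block instead.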
$S_2$ is the union of its finitely axiomatizable fragments $T^i_2$. This means that $S_2$ proves the consistency of each fragment $G_i$, but on the other hand, if it proved the consistency of the full quantified propositional calculus $G$, it would imply that $G_i$ polynomially simulates $G$ for some $i$, and this is assumed to be false. To put it differently, the $\forall\Delta^b_1$-consequences of $S_2$ (as well as $S_2$ itself) are conjectured not to be finitely axiomatizable.
The correspondence of theories and propositional proof systems also extends to complexity classes. Sets definable by $\Sigma^b_i$ formulas in the standard model of arithmetic are exactly the sets in the $i$-th level $\Sigma^P_i$ of the polynomial hierarchy. The theories $T^i_2$ have induction for $\Sigma^b_i$ formulas, and their provably total $\Sigma^b_{i+1}$-definable functions are $\mathrm{FP}^{\Sigma^P_i}$, so these theories correspond to levels of the polynomial hierarchy. On the propositional side, satisfiability of $\Sigma^q_i$ formulas is a $\Sigma^P_i$-complete problem. Taking the union, $S_2$ corresponds to the full polynomial hierarchy $\mathrm{PH}$. However, the complexity class corresponding to $G$ is $\mathrm{PSPACE}$, as satisfiability of unrestricted quantified propositional formulas is $\mathrm{PSPACE}$-complete. Thus, asking $S_2$ to prove the consistency of $G$ is in the same spirit as collapsing $\mathrm{PSPACE}$ to $\mathrm{PH}$ (and therefore to one of its fixed levels). (Don’t quote me on this. While the collapse of the $T^i_2$ hierarchy does imply the collapse of $\mathrm{PH}$, for propositional proof systems this becomes only a loose analogy.)
To also give an upper bound on the consistency strength: the consistency of $G$, and therefore of FPA, is provable in theories corresponding to $\mathrm{PSPACE}$. The best-known such theory is Buss’s theory $U^1_2$, which is a “second-order” extension of $S_2$ with comprehension for bounded sets defined by bounded formulas without second-order quantifiers, and length induction for bounded $\Sigma^1_1$-formulas. Notice that things get really messy here, as the first-order objects of $U^1_2$ correspond to second-order objects of FPA, and second-order objects of $U^1_2$ have no analogue in FPA. Alan Skelley formulated an equivalent (technically, RSUV-isomorphic) theory $W^1_1$. This is syntactically a third-order arithmetic, and it is more directly comparable to FPA (as numbers of one theory correspond to numbers of the other, and sets correspond to sets). $W^1_1$ proves the consistency of $G$, and thus of FPA.
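In summary (an informal restatement of the correspondences described above, not a precise theorem):

- $T^i_2$ corresponds to the proof system $G_i$ and to the level $\Sigma^P_i$ of the polynomial hierarchy;
- $S_2=\bigcup_i T^i_2$ corresponds to the family of the $G_i$ and to $\mathrm{PH}$;
- $U^1_2$ (equivalently, $W^1_1$) corresponds to $G$ and to $\mathrm{PSPACE}$.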
Best Answer
The axioms of first-order arithmetic include the induction schema, which says that, for every formula $A(x)$ with free variable $x$, the conjunction of $A(0)$ and $\forall x\,(A(x)\rightarrow A(x+1))$ implies $\forall x\,A(x)$. This is, of course, a special case of the well-known and basic induction property of the natural numbers that says the same thing for any property $A(x)$ whatsoever, whether or not it's defined by a first-order formula. For anyone who (1) understands the natural numbers well enough to grasp the general induction principle and (2) believes that (first-order) quantifiers over the natural numbers are meaningful so that first-order formulas $A(x)$ really define properties, it is clear that the natural number system satisfies all of the first-order Peano axioms, and therefore those axioms are consistent.
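Spelled out, each instance of the induction schema is the sentence
$$\bigl(A(0)\wedge\forall x\,(A(x)\rightarrow A(x+1))\bigr)\rightarrow\forall x\,A(x),$$
one axiom for each first-order formula $A(x)$ of the language of arithmetic (possibly with additional free variables treated as parameters).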
A difficulty arises if one adopts a very strong constructivist or finitist viewpoint, doubting item (2) above, i.e., questioning the meaning of first-order quantifiers $\forall z$ and $\exists z$ when $z$ ranges over an infinite set (like $\mathbb N$) so that one can't actually check each individual $z$. From such a viewpoint, the formulas $A(x)$ occurring in the induction schema are gibberish (or close to gibberish, or at least not clear enough to be used in mathematical reasoning), and then the proposed consistency proof collapses.
The chief virtue of Gentzen's consistency proof is that it essentially avoids any explicit quantification over infinite sets. It can be formulated in terms of very basic, explicit, computational constructions (technically, in terms of primitive recursive functions and relations). There is, however, a cost for this virtue, namely that one needs an induction principle not just for the usual well-ordering of the natural numbers but for the considerably longer well-ordering $\varepsilon_0$.
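A standard way to see that this is finitistically meaningful (background folklore, not specific to Gentzen's paper): every ordinal $\alpha<\varepsilon_0$ has a Cantor normal form
$$\alpha=\omega^{\alpha_1}\cdot c_1+\cdots+\omega^{\alpha_k}\cdot c_k,\qquad \alpha_1>\cdots>\alpha_k,\quad c_i\in\mathbb{N}_{>0},$$
with the exponents $\alpha_i$ again below $\varepsilon_0$, so such ordinals can be coded by natural numbers (or finite trees), and comparison of codes is a primitive recursive relation. "Induction up to $\varepsilon_0$" is then a statement about this primitive recursive ordering, with no quantification over infinite sets needed to state it.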
Thus, Gentzen uses a much longer well-ordering, but his induction principle is only about primitive recursive properties, not about arbitrary first-order definable properties. There is a trade-off: length of the well-ordering versus complexity of quantification.
I believe the trade-off can be made rather precise, but I don't remember the details. Recall that $\varepsilon_0$ is the limit of the sequence of iterated exponentials defined by $\omega(0)=\omega$ and $\omega(n+1)=\omega^{\omega(n)}$. If we weaken PA by limiting the induction principle to formulas $A(x)$ that can be defined with a fixed number $n$ of quantifiers, then the consistency of this weakened theory can be proved using primitive recursive induction up to $\omega(n)$, as proved by Carnielli and Rathjen in "Hydrae and subsystems of arithmetic". In other words, the trade-off is that an additional quantifier in the induction formulas costs an additional exponential in the ordinal.
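Concretely, the first few stages of that hierarchy are
$$\omega(0)=\omega,\qquad\omega(1)=\omega^{\omega},\qquad\omega(2)=\omega^{\omega^{\omega}},\qquad\dots,\qquad\varepsilon_0=\sup_n\omega(n),$$
so (up to the exact indexing conventions of the cited paper, which I have not verified) allowing one more quantifier in the induction formulas moves the required ordinal one level up this tower.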