[Math] How is first-order logic derived from the natural numbers?

logic, peano-axioms

I've heard that first-order logic comes from Peano Arithmetic, yet I can't see how.
We don't need numbers to formulate quantifiers, variables, functions, constants, relations, or even sets.
Instead, it is possible to formulate all of Peano's axioms using first-order logic and sets, as one can see at http://de.wikipedia.org/wiki/Peano-Axiome under the heading "Ursprüngliche Formalisierung" ("original formalisation").

So, how is first-order logic generated from the natural numbers? Does it have anything to do with Primitive Recursive Arithmetic?

Best Answer

Just in historical terms, "first-order logic comes from Peano Arithmetic" is not correct. Rather, the two ideas proceeded hand in hand: extending axiomatic reasoning beyond (Euclidean) geometry into algebra and arithmetic, and formalising logic to express more than Aristotelian syllogisms could.

Progress in logic beyond the syllogism's limited handling of quantifiers was gradual through the 19th century; important names include Augustus De Morgan and J.S. Mill. Gottlob Frege essentially nailed the problem in 1879 with his Begriffsschrift, introducing quantification over bound variables (the idea behind the modern first-order quantifiers $\forall, \exists$, though in a very different notation) and hence codifying the logic still used today.

Meanwhile mathematicians, including De Morgan again, Grassmann, and Peirce, had been formalising the theory of arithmetic. Frege's syntax was obviously helpful for putting sentences like Bolzano's $\epsilon$-$\delta$ definition of continuity into a precise form, and equally for the laws of arithmetic. Peirce published a set of axioms for arithmetic in 1881.

But the important part of the project was to get beyond the "deductive" reasoning of syllogisms to all the "inductive" rules which mathematicians use in natural proofs. These require writing down not just true first-order statements about numbers (etc.), but reasoning about collections of numbers together. Frege (and other philosophers, going way back) called such a collection the extension of a concept. The extension of a mathematical predicate $P(x)$ is just $\{x: P(x)\}$, the collection of everything for which $P$ is true. In particular, with Frege's new logical notation, any first-order formula $\phi(x)$ in the notation expresses a concept, so $\{x: \phi(x)\}$ should be an extension. The notation $x \in A$, meaning "$x$ satisfies the concept defining $A$", was introduced slightly later, by Peano.

Obviously these extensions are now recognisable as (naive) sets. (I can't recall offhand when this word was introduced). Dedekind in 1888 and then more neatly Peano in 1889 gave axioms for arithmetic including reasoning with these sets. Peano's point is that all the use one needs to make of sets in arithmetic comes in the familiar rule of arithmetic induction, which (as had been obvious for ages) can't be expressed in syllogisms. "If the extension of any predicate includes 0, and includes $x+1$ whenever it includes $x$, then it includes all numbers"---Peano was certainly thinking in terms of full Fregean logic.
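
In modern symbols (my rendering, not Peano's original notation), that induction rule is a single second-order axiom:

$$\forall A\,\Big(\big(0 \in A \;\land\; \forall x\,(x \in A \rightarrow x+1 \in A)\big) \rightarrow \forall x\,(x \in A)\Big),$$

where the outer quantifier $\forall A$ ranges over extensions (sets) rather than numbers, which is exactly what makes it second-order rather than first-order.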

Then in 1902 Bertrand Russell demonstrated, in a letter to Frege, the paradox (which he had discovered the previous year) of the set $\{x: x \not \in x\}$. This horrified Frege, Dedekind, and everybody else involved with the project of formalising mathematics: it showed that some of the formulas expressible in Frege's syntax aren't consistent concepts. Or possibly it showed that the whole formalism was wrong; but it was obviously too useful for real mathematics to throw away entirely.
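
Spelling the contradiction out: write $R = \{x : x \not\in x\}$ and ask whether $R$ satisfies its own defining concept. Substituting $R$ for $x$ gives

$$R \in R \iff R \not\in R,$$

which can be neither true nor false, so $R$ is not a consistent extension.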

So Russell, Zermelo, and all the rest put in a lot of effort salvaging Fregean logic from the paradox. The obvious cause of the problem with $\{x: x \not \in x\}$ is that it is self-referential (like the ancient Liar paradox), and the natural way to avoid self-reference is to stratify your reasoning. Philosophically, the paradoxical set fails to distinguish between an object and a predicate. Russell's solution was, from the top down, to impose a Byzantine ramified structure of orders and types on the sets and quantifiers, to ensure that objects and predicates never got conflated. From this the language of first-order and second-order logic emerged.

The downside of this is that you can obviously do first-order reasoning about (well-behaved) sets themselves---De Morgan's laws, for example---and so you need to replicate all of first-order logic in the second-order case, and so on for a towering hierarchy of identical proofs with different subscripts. The symbol $\in$ becomes very difficult to work with.
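
For instance (my illustration, not Russell's), the set-level De Morgan law

$$(A \cup B)^{c} = A^{c} \cap B^{c}$$

is just the propositional law $\lnot(p \lor q) \leftrightarrow (\lnot p \land \lnot q)$ restated one level up, and in Russell's stratified system that restatement has to be proved afresh at every level of the hierarchy.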

Zermelo instead introduced a bottom-up construction of well-behaved sets which could be guaranteed not to need Russell's structure, letting $\in$ be an ordinary mathematical relation while still avoiding the paradox.

Fortunately, Peano's axioms can be made to work in either context. You don't need the full self-referential behaviour of $\in$ to talk about arithmetic, so "second-order" formulas in Russell's sense are adequate. In Zermelo's set theory you can construct a set satisfying Peano's second-order induction axiom whenever the predicate being inducted over is expressible as a set, so (relative to the set theory) Peano arithmetic works here as well.
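
As a sketch of the standard construction (the von Neumann encoding used in modern treatments; Zermelo's own 1908 version used $S(x) = \{x\}$ instead):

$$0 = \emptyset, \qquad S(x) = x \cup \{x\}.$$

The Axiom of Infinity provides a set containing $0$ and closed under $S$, and $\mathbb{N}$ is obtained as the intersection of all its subsets with that closure property, which makes the second-order induction axiom hold for $\mathbb{N}$ essentially by definition.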

Much of the work on Zermelo's sets was done in the 1920s by Skolem, who refined the Zermelo axioms, in particular recasting them in first-order terms. In the meanwhile a whole school of "Intuitionistic Logic" had been developed, to try to capture more accurately the reasoning mathematicians really need, rather than rowing backwards from the excessive and inconsistent Fregean system, in which Peano arithmetic seems to work but only haphazardly. This wasn't successful on its own terms (though it has proved useful in other contexts, category theory and so forth). Primitive Recursive Arithmetic was Skolem's formulation of an intuitionistically valid fragment of arithmetic. It didn't supplant full Peano reasoning, but it is genuinely interesting in its own right.
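
To give the flavour (a standard textbook illustration, not Skolem's own notation): PRA has no quantifiers at all; instead it has a symbol for every function built up by primitive recursion, such as

$$x + 0 = x, \qquad x + S(y) = S(x + y),$$
$$x \cdot 0 = 0, \qquad x \cdot S(y) = (x \cdot y) + x,$$

together with an induction rule restricted to quantifier-free formulas. That restriction is what makes the system acceptably "finitistic" while still proving a large share of ordinary number theory.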

Finally in 1931 Gödel published his Incompleteness Theorems, showing that second-order Peano reasoning, and equally arithmetic in the Zermelo universe, cannot prove their own consistency. At this point mathematicians heaved a sigh of relief and stopped worrying that they might be proved inconsistent. And that is where the story ends.

(All dates off Wikipedia.)
