Peano arithmetic is relatively strong, and decidable theories of arithmetic like Presburger Arithmetic are fairly unusual. Here's a "tower" of three theories between the two, of increasing strength:
- Robinson's Q, an induction-free axiomatisation of arithmetic whose only function symbols are successor, addition and multiplication. It cannot prove exponentiation total;
- Exponential function arithmetic (EFA) has an exponentiation operator and a very limited form of induction. It cannot prove iterated exponentiation (a.k.a. tetration) total;
- Primitive recursive arithmetic is sometimes axiomatised as Peano arithmetic but with induction restricted to formulae without universal quantification (it is more usually given as a purely equational theory). It cannot prove the Ackermann function total.
Note how I've distinguished them by which arithmetic operators they have, and which functions they cannot prove total. This is a very natural measure of the strength of theories of arithmetic: an essential part of proving the incompleteness theorem is providing a diagonalisation operation, which we can do only if we have multiplication. All of these theories have multiplication, and so all of them admit the incompleteness theorems.
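To make the "which functions they cannot prove total" hierarchy concrete, here is a small sketch (mine, not from the answer above) computing the three separating functions naively for tiny inputs. Each is defined by iterating the previous one, which is exactly why each is so much faster-growing:

```python
# Illustrative sketch: the functions whose totality separates the theories,
# each built by iterating the one before it.

def exp(a, b):
    """Exponentiation as iterated multiplication (what Q cannot prove total)."""
    result = 1
    for _ in range(b):
        result *= a
    return result

def tetration(a, b):
    """Iterated exponentiation, a.k.a. tetration (what EFA cannot prove total)."""
    result = 1
    for _ in range(b):
        result = exp(a, result)
    return result

def ackermann(m, n):
    """The Ackermann function (what PRA cannot prove total).

    It grows faster than every primitive recursive function, which is
    why no fixed depth of iteration suffices to define it.
    """
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(exp(2, 10))       # 1024
print(tetration(2, 4))  # 2^(2^(2^2)) = 65536
print(ackermann(2, 3))  # 9
```

Even these tiny arguments hint at the growth: `tetration(2, 5)` already has nearly twenty thousand digits, and `ackermann(4, 2)` is astronomically large.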
I'll try to answer your questions one by one hoping to be clear enough without making reference to very deep aspects (although your questions are already very deep):
- "So, since it is complete, decidable and consistent why don't we use Presburger arithmetic instead? I know that it may has something to do with it's weakness, but it's not very clear what is the meaning of this weakness, presumably one can prove less things with it perhaps?"
You are right in saying that Presburger Arithmetic proves fewer things. Mathematicians working in Number Theory usually like to talk about multiplication of natural numbers in general: an easy example is that they would like to have the property that "multiplication commutes for all natural numbers"; another example is that they like to talk about all the prime numbers at once. With Presburger Arithmetic you cannot do that: if you check the axioms (http://en.wikipedia.org/wiki/Presburger_arithmetic), they don't mention multiplication, so you can only define it number by number. Short answer: Presburger Arithmetic has many cute logical properties, but not enough mathematical richness, and that's why it is not "used" in practice.
- "My other doubt about Presburger arithmetic is why is it a problem of it having no multiplication? I always see multiplication being defined after the operation of sum: The multiplication is just a given number of sums, can't we just make the multiplication and use it anyway or the multiplication causes problems?"
Yes we can, but as said earlier, we can only do it one number at a time. We would have to prove something like "the successor of the successor of zero (two) multiplied (imagine we just defined it as you suggested) by the successor of zero (one) is equal to the successor of the successor of zero." We would have to prove a theorem like this for every natural number, which is not physically possible. We would also have to prove commutativity and associativity of multiplication by 2 separately for each natural number. In summary, this is not a very practical thing to do.
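To make this concrete (a sketch in my own notation, not from the original answer): for each *fixed* multiplier, Presburger's language does let you define multiplication, just not uniformly:

$$y = 2\cdot x \iff y = x + x, \qquad y = 3\cdot x \iff y = x + x + x, \qquad \ldots$$

Each constant $k$ needs its own formula with $k$ summands. A single formula $\varphi(x, z, y)$ expressing $y = z\cdot x$ uniformly in $z$ cannot exist: if it did, Presburger arithmetic could express full multiplication, and it would then inherit the undecidability of arithmetic with both $+$ and $\times$, contradicting its decidability.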
- "Also, from the first theorem of incompleteness: "Any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete." Doesn't Presburger arithmetic have an elementary arithmetic?"
Allow me to assume you read that in a Wikipedia article. Articles like this are usually written so that the general public understands them. In precise mathematical terms, however, a "theory capable of expressing elementary arithmetic" can be read as "a theory that can prove (or at least interpret) the axioms of a weak base theory of arithmetic with multiplication, such as Robinson's Q". Hence the answer to your question is "no": Presburger arithmetic cannot express elementary arithmetic in this sense. For more specific information on what you could interpret as "elementary arithmetic", you can check this MO question: https://mathoverflow.net/q/118183/32592
- "Why can't we define $a\times b$ for all natural numbers?"
This is rather hard to answer, as I have not worked with the axioms of Presburger Arithmetic before (so take into account that the rest of the paragraph might not be very accurate). But let's attack another problem to get an idea of why this can't be done: how do you write in first-order logic that there are two objects? $\exists x\exists y(x\neq y)$. That there are three objects? $\exists x\exists y\exists z(x\neq y\wedge x\neq z\wedge y\neq z)$. And so on. How do you write that there are infinitely many objects? You would have to use an infinitely long formula, which is physically impossible. If you wanted to define multiplication in Presburger Arithmetic, you would face something similar: you would need an infinitely long formula too.
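The pattern above can be mechanised. Here is a small sketch (the function name is mine) that generates the formula "there exist at least $n$ distinct objects" for any $n$, making visible how the formulas grow without bound:

```python
# Sketch: generate the first-order formula asserting the existence of
# n pairwise-distinct objects. There is one formula per n, but no single
# finite formula covering "infinitely many" -- the lengths are unbounded.

from itertools import combinations

def at_least(n):
    """Return the formula saying there are at least n distinct objects."""
    vars_ = [f"x{i}" for i in range(1, n + 1)]
    quantifiers = "".join(f"∃{v}" for v in vars_)
    # One inequality for every unordered pair of variables.
    distinct = " ∧ ".join(f"{a}≠{b}" for a, b in combinations(vars_, 2))
    return f"{quantifiers}({distinct})"

print(at_least(2))  # ∃x1∃x2(x1≠x2)
print(at_least(3))  # ∃x1∃x2∃x3(x1≠x2 ∧ x1≠x3 ∧ x2≠x3)
```

The formula for $n$ objects contains $n(n-1)/2$ inequalities, so no finite formula in this family can express "there are infinitely many objects" — the same obstruction the answer describes for defining multiplication clause by clause.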
One could also wonder: if Presburger arithmetic lacks multiplication, can we do arithmetic with just multiplication and no addition? It turns out that such a system (known as Skolem arithmetic) is also complete and consistent, but it has the same practicality issue pointed out above: https://mathoverflow.net/a/19874/32592
Best Answer
Essentially, yes, but I believe the undecidability of Peano arithmetic (henceforth, PA) follows from the way the proof of Gödel's incompleteness theorem goes, rather than being a consequence of the fact that PA is incomplete. The proof (outlined below) starts by showing that PA can talk about computable relations, and goes on to show from this how to construct an unprovable sentence. However, we can take a different approach to show that PA is undecidable: if PA can talk about computable relations, then you can formulate a sentence in the language of PA that is true if and only if a given algorithm halts / does not halt. (Briefly: an algorithm is the same thing as a computable partial function, and an algorithm halts on some input if and only if the corresponding partial function is defined on that input.) So if an algorithm could decide whether arbitrary sentences of PA are provable, we would have an algorithm that solves the halting problem... but no such algorithm exists. So the key point is that PA is rich enough to talk about computation. An essential ingredient for talking about computation is the ability to encode a pair of numbers as a single number, together with the ability to recover the original pair from the encoding. It's not immediately clear to me that $+$ alone is insufficient to do this, but it's certainly plausible that you need at least two binary operations.
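As a concrete illustration of the pair-encoding ingredient (my example, not part of the original answer): Cantor's pairing function builds a bijection $\mathbb{N}\times\mathbb{N}\to\mathbb{N}$ out of nothing but addition and multiplication, and the pair is fully recoverable from the code:

```python
# Sketch: Cantor's pairing function -- encode a pair of naturals as one
# natural using only + and *, and recover the pair from the encoding.

from math import isqrt

def pair(x, y):
    """Cantor pairing: walk the diagonals of the N x N grid."""
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):
    """Invert the pairing: recover (x, y) from z."""
    w = (isqrt(8 * z + 1) - 1) // 2   # diagonal index: largest w with w(w+1)/2 <= z
    t = w * (w + 1) // 2              # code of the diagonal's first element
    y = z - t
    return w - y, y

z = pair(3, 5)
print(z, unpair(z))  # 41 (3, 5)
```

The division by 2 is harmless since $(x+y)(x+y+1)$ is always even. Iterating `pair` encodes arbitrary finite sequences of numbers, which is the starting point for coding formulas and proofs as single numbers.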
Here is a sketch of the proof of Gödel's first incompleteness theorem: First let's select a sufficiently powerful theory $T$, e.g. PA, and assume that it is a consistent theory, i.e. does not prove a contradiction.
Note I haven't said anything about whether $P(n)$ is actually true. It turns out there is some subtlety here. The soundness and completeness theorems for first-order logic together tell us that a sentence is provable from the axioms of a first-order theory if and only if it is true in every model of that theory. With some stronger consistency assumptions, we can also show that $\lnot P(n)$ is not provable in PA, and this means that there are models of PA where $P(n)$ is true and models where it is false.
The key point is that the phrases "$n$ encodes a formula ..." and "$m$ encodes a valid proof of ..." are strictly outside the theory. The interpretation of a particular number as a formula or proof is defined externally, and we only define it for "standard" numbers. The upshot of this is that in a model $\mathcal{M}$ of PA where $P(n)$ is false, there is some non-standard number $m \in \mathcal{M}$ which $\mathcal{M}$ "believes" is a valid proof of $P(n)$, but because it's non-standard, we cannot translate it into a real proof.