It is well known that intuitive set theory (or naive set theory) is characterized by paradoxes, e.g. Russell's paradox, Cantor's paradox, etc. To avoid these and any other potential paradoxes, discovered or undiscovered, the ZFC axioms impose constraints on the existence of a set. But ZFC set theory is built on mathematical logic, i.e., a first-order language. For example, the axiom of extensionality is the wff $\forall A\,\forall B\,(\forall x\,(x\in A\leftrightarrow x\in B)\rightarrow A=B)$.

But mathematical logic also uses the concept of sets: the alphabet is a set of symbols, and there are the set of variables, the set of formulas, the set of terms, as well as functions and relations, which are in essence sets. However, I find that these sets are used freely, without worrying about their existence or about the paradoxes that occur in intuitive set theory. That is to say, mathematical logic uses intuitive set theory.

So, is there any paradox in mathematical logic? If not, why not, and by what reasoning can we exclude this possibility? This reasoning should not be ZFC (or any analogue of it) and should lie beyond current mathematical logic, because otherwise ZFC depends on mathematical logic while mathematical logic depends on ZFC, which is circular reasoning. If there is a paradox, what should we do? Since we cannot tolerate paradoxes in intuitive set theory, neither should we tolerate paradoxes in mathematical logic, which is considered the very foundation of the whole of mathematics. Of course there is a third answer: we do not know, until one day a genius finds a paradox in the intuitive set theory used at will in mathematical logic, and then the entire edifice of mathematics collapses.

This problem has puzzled me for a long time, and I will appreciate any answer that can dispel my apprehension. Thanks!
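For concreteness, recall the standard derivation behind Russell's paradox, the kind of argument the ZFC restrictions on set existence are designed to block: unrestricted comprehension lets us form
$$R=\{x \mid x\notin x\}\quad\Longrightarrow\quad R\in R\leftrightarrow R\notin R,$$
a contradiction obtained from the comprehension principle alone.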
Sets in Mathematical Logic
lo.logic
Related Solutions
I would like to question two statements you make because they paint an oversimplified picture, which unfortunately is alluring to mathematicians who do not want to think about foundations (and they should not be blamed for that any more than I should be blamed for not wanting to think about PDEs).
"Most mathematicians accept as given the ZFC (or at least ZF) axioms for sets." This is what mathematicians say, but most cannot even tell you what ZFC is. Mathematicians work at a more intuitive and informal manner. High party officials once declared that ZFC was being used by everyone, so it has become the party line. But if you read a random text of mathematics, it will be equally easy to interpret it in other kinds of foundations, such as type theory, bounded Zermelo set theory, etc. They do not use the language of ZFC. The language of ZFC is completely unusable for the working mathematician, as it only has a single relation symbol $\in$. As soon as you allow in abbreviations, your exposition becomes expressible more naturally in other formal systems that actually handle abbreviations formally. Informal mathematics is informal, and thankfully, it does not require any foundation to function, just like people do not need an ideology to think. If you doubt that, you have to doubt all mathematics that happened before late 19th century.
"They [logicians] realize that in order to talk of logic they don't need the full power of set theory, so they take logic as God-given instead." I do not know of any logicians, and I know many, who would say that logic is "God-given", or anything like that. I do not think logicians are born into a life rich with the "full power of set theory" which they throw away in order to become ascetic first-order logicians. That is a nice philosophical story detached from reality. The logicians I know are usually quite careful, skeptical, and inquisitive about foundational issues, reflect carefully on their own experiences, and almost never give you a straight answer when you ask "where does logic come from?" Your view is naive and inaccurate, if not slightly demeaning.
If I understand your question correctly, you are asking whether there is a difference between the following two views:
We start with naive set theory and on top of it we formalize set theory.
We start with first-order logic and immediately formalize set theory.
Well, we are proceeding from two different meta-theories. The first one allows us a wide spectrum of semantic methods. We can refer to "the standard model of Peano arithmetic" because we "believe in natural numbers", and we can invent Tarskian model theory without worrying where it came from.
The second method is more restricted. It will lead to syntactic and proof-theoretic methods, since the only tools we have given ourselves initially are syntactic in nature, namely first-order theories. There will be careful analysis of syntax. For advanced methods, however, we will typically resort to at least some amount of "naive mathematics". Ordinals will come into play, it will be hard to live without completeness theorems (which involve semantics), etc.
However, this is not how real life works. The dilemma you present is not really there. A working mathematician does not concern himself with these issues anyway, while a logician will likely refuse to be categorized as one or the other breed.
That is my guess, based on the experience that my fellow logicians are complicated animals and it is hard to get to the bottom of their foundational guts.
Many paradoxes are first expressed in a semi-formal way, for example "the least number not describable by fewer than eleven words" (the quoted phrase itself describes that number in ten words). They are warning signs that lead us to further analysis, and they can be resolved in different ways:
We can just get used to a "paradox" and accept it as "truth", e.g., there are infinite sets of different sizes, or there is a real function which is continuous at irrational arguments and discontinuous at rational arguments. There are famous paradoxes in philosophy which would not be considered paradoxes today, such as Zeno's paradox ("How can an infinite sum of positive numbers be finite? No movemement is possible!") and various arguments from Prime Cause ("How could we have an infinite descending chain of causality? God must exist!").
We find the paradox unacceptable and so we need to change something. We might change rules of logic, definitions, or axioms, everything is up in the air.
A paradox which actually proves falsehood, or a statement as well as its negation, is more properly called an inconsistency. An inconsistency is something we can never get used to and so we have to change something. A milder form of paradox is one which does not prove falsehood but just something very counter-intuitive, in which case we have to decide whether to accept it, or admit that our attempt to bring something into the realm of mathematics worked in unexpected ways.
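As a worked version of the Zeno example above: the infinitely many positive distances left to traverse form a geometric series, and an infinite sum of positive numbers can indeed be finite,
$$\sum_{n=1}^{\infty}\frac{1}{2^{n}}=\frac12+\frac14+\frac18+\cdots=1,$$
so what was once a paradox is now a theorem we have gotten used to.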
I think this question is about how to tell whether a given "paradox" is of the first or second kind. When should we just "get used" to a paradox, and when should we "change things"? In the case of Russell's paradox we had no choice but to change something. In the case of the Banach–Tarski paradox there is a choice. The accepted view is that we should just get used to it, but there are interesting alternatives which force us to rethink the notion of space. Even though these alternative notions of space are far better suited for probability, measure, and randomness than the classical approach, mathematicians are unlikely to adopt them widely, out of sheer inertia and historical coincidence. But mathematicians do not like to admit that mathematics is a human activity, and as such subject to sociological and historical trends.
So I suppose my answer is this: when faced with an unacceptable counter-intuitive statement which offers several mathematical resolutions, the choice will be made through social interaction which has some mathematical content, but not as much as we would like to think. Other factors, such as arguments from authority and social inertia, will play an important role.
Best Answer
I have been asked this question several times in my logic or set theory classes. The conclusion that I have arrived at is that you need to assume that we know how to deal with finite strings over a finite alphabet. This is enough to code the countably many variables we usually use in first-order logic (and finitely or countably many constant, relation, and function symbols).
So basically you have to assume that you can write down things. You have to start somewhere, and this is, I guess, a starting point that most mathematicians would be happy with. Do you fear any contradictions showing up when manipulating finite strings over a finite alphabet?
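As a minimal sketch of this coding (the alphabet, names, and encoding scheme are my own illustrative choices, not part of the answer): the countably many variables $v_0, v_1, v_2, \dots$ can each be written as a finite string over the three-symbol alphabet $\{\mathtt{v}, \mathtt{0}, \mathtt{1}\}$.

```python
# A minimal sketch: coding countably many first-order variables
# as finite strings over the finite alphabet {'v', '0', '1'}.
# (Illustrative only; the alphabet and encoding are my own choices.)

ALPHABET = {'v', '0', '1'}

def encode_variable(n: int) -> str:
    """Return the string coding the n-th variable, e.g. v_5 -> 'v101'."""
    if n < 0:
        raise ValueError("variable indices start at 0")
    return 'v' + format(n, 'b')   # 'v' followed by n written in binary

def decode_variable(s: str) -> int:
    """Recover the index n from a code string produced above."""
    if not (s.startswith('v') and set(s) <= ALPHABET and len(s) > 1):
        raise ValueError(f"not a variable code: {s!r}")
    return int(s[1:], 2)

if __name__ == "__main__":
    # Distinct indices yield distinct finite strings, so finitely
    # many symbols suffice for countably many variables.
    for n in [0, 1, 5, 42]:
        code = encode_variable(n)
        assert decode_variable(code) == n
        print(n, "->", code)
```

The same trick handles the countably many constant, relation, and function symbols: everything in the syntax of first-order logic reduces to manipulating finite strings over a fixed finite alphabet.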
What mathematical logic does is analyze the concept of proof using mathematical methods. So we have some intuitive understanding of how to do mathematics; then we develop mathematical logic, and then we return and consider what we are actually doing when we do mathematics. This is the hermeneutic circle that we have to go through, since we cannot build something from nothing.
We strongly believe that if there were any serious problems with the foundations of mathematics (more substantial than merely assuming too strong a collection of axioms), the problems would show up in the logical analysis of mathematics described above.