[Math] Are there any nontrivial examples of contradictions arising in non-foundational or applied math due to naive set theory

big-list, foundations, logic, paradoxes, set-theory

I understand that naive set theory, whose axioms are extensionality and unrestricted comprehension, is inconsistent, due to paradoxes such as Russell's, Curry's, Cantor's, and Burali-Forti's.
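(For concreteness, here is the standard one-line sketch of how a contradiction falls out of unrestricted comprehension, which asserts that $\{x \mid \varphi(x)\}$ is a set for every formula $\varphi$: take $\varphi(x)$ to be $x \notin x$.)

$$R = \{x \mid x \notin x\} \quad\Longrightarrow\quad R \in R \iff R \notin R.$$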

But these all seem to me like pathological, esoteric, ad hoc examples that really only matter in foundations; most non-foundational and applied mathematics wouldn't go anywhere near them.

Am I wrong here? If we were to do non-foundational math over naive set theory and just ignore the paradoxes, what problems might we face? Yes, I know that we can technically prove $0=1$ because logic, but I'm looking for more interesting examples, particularly ones that could arise without having to specifically look for them.
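(A minimal sketch of the "because logic" step, for completeness: from any contradiction $P \wedge \neg P$, classical logic proves every statement, in particular $0=1$, by $\vee$-introduction followed by disjunctive syllogism.)

$$P \;\Longrightarrow\; P \vee (0=1), \qquad \bigl(P \vee (0=1)\bigr) \wedge \neg P \;\Longrightarrow\; 0=1.$$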

Question: Notwithstanding technicalities like explosion, are there any "natural" examples of contradictions arising in non-foundational or applied math due to the paradoxes of naive set theory?

Has anyone ever arrived at a false statement in, say, algebra or number theory, using naive sets?

edit: I'd like to be clear that I'm playing devil's advocate. I'm of course aware that relying on an inconsistent theory is in general a bad idea, but not all flawed structures collapse immediately. How far could we go in practice before we ran into problems?

edit: By "non-foundational" I basically mean anything outside of set theory or mathematical logic. If the question of a theory's consistency comes up at all (this thought experiment notwithstanding), then it's probably "foundational". But the boundary is of course fuzzy.

Best Answer

With a strict enough definition of "non-foundational mathematics", I think the answer is probably "no" (although I would be very interested in seeing potential examples). However, this shouldn't make mathematicians working on such mathematics feel safe about using unrestricted comprehension. The reason is that it's not always clear a priori what mathematics will turn out to be "foundational".

Indeed, people may start working on some mathematics that seems non-foundational but then turns in a foundational direction. For example, Cantor's development of set theory was a natural consequence of his study of sets of uniqueness in harmonic analysis.

If someone working in a supposedly non-foundational branch of mathematics ended up with a contradiction by using unrestricted comprehension, then with the benefit of hindsight we could say that he or she must have been working in an area related to foundations after all.

It might seem like cheating to make such a declaration after the fact, but perhaps it is not: it seems likely that, from a novel use of unrestricted comprehension to obtain a contradiction, one could extract a novel use of replacement to prove a theorem that could not have been proved without replacement (i.e., using only restricted comprehension). I say this because replacement is a natural intermediate step between restricted and unrestricted comprehension.
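(To illustrate the "intermediate step" point, here is a rough schematic comparison of the three schemas; exact formulations vary by presentation, with $\varphi$ ranging over formulas and $F$ over definable class functions.)

$$\begin{aligned}
\text{Restricted comprehension (separation):} &\quad \forall A\,\exists S\,\forall x\,\bigl(x \in S \leftrightarrow x \in A \wedge \varphi(x)\bigr),\\
\text{Replacement:} &\quad \forall A\,\exists S\,\forall y\,\bigl(y \in S \leftrightarrow \exists x \in A\;\, y = F(x)\bigr),\\
\text{Unrestricted comprehension:} &\quad \exists S\,\forall x\,\bigl(x \in S \leftrightarrow \varphi(x)\bigr).
\end{aligned}$$

Separation only carves subsets out of a set that is already given; replacement lets you form the image of a given set under a definable operation; unrestricted comprehension drops the ambient set entirely, which is exactly where the paradoxes enter.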

Mathematics that uses replacement in an essential way is often considered ipso facto to be foundational. So I think it is likely that mathematics that uses unrestricted comprehension in an essential way (to the extent that it can be salvaged) would be considered foundational as well.

(This answer doesn't address the question of how long, on average, it would take people using unrestricted comprehension in non-foundational-seeming areas of mathematics to run into problems. I think that question is very interesting but probably also very hard to answer.)