[Math] Statements which were given as axioms but which later turned out to be false.

big-list, ca.classical-analysis-and-odes, ho.history-overview, mathematical-philosophy

I know that early axiomatizations of real arithmetic (in the first half of the nineteenth century) were often inadequate. For example, the earliest axiomatizations did not include a completeness axiom (there is none, for instance, in Cauchy's Cours d'Analyse).
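
For concreteness, the sort of completeness axiom I have in mind is something like the modern least upper bound property:

$$ \text{every nonempty } S \subseteq \mathbb{R} \text{ that is bounded above has a least upper bound } \sup S \in \mathbb{R}. $$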

I also know that mathematicians in the early nineteenth century had some false beliefs about real analysis which were later uprooted as the process of rigorization continued. For example, it was widely believed that a continuous real function must be differentiable except at isolated points – a claim which Weierstrass refuted in the 1870s by defining a function that is continuous everywhere but differentiable nowhere. (http://en.wikipedia.org/wiki/Weierstrass_function).
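
For reference, Weierstrass's example can be written as

$$ W(x) = \sum_{n=0}^{\infty} a^{n} \cos(b^{n} \pi x), \qquad 0 < a < 1,\ b \text{ an odd integer},\ ab > 1 + \tfrac{3\pi}{2}; $$

the series converges uniformly (so $W$ is continuous), yet $W$ is differentiable at no point. Hardy later weakened the conditions to $ab \geq 1$.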

What I'd like to know is whether an "axiom" was ever proposed for real arithmetic that subsequently turned out to be false.

I'd also like to hear about such cases from other branches of mathematics.

However, I'm not so interested in cases of inconsistent sets of axioms (e.g. Gottlob Frege's Grundgesetze, or Church's first formulation of the lambda calculus).

The best example I've found so far is Leibniz's "principle of continuity", according to which "what is true up to the limit is true at the limit". Apparently this was sometimes called an "axiom", although it is obviously not true in general. (I'm getting this from Lakatos's Proofs and Refutations, p. 128.) I'm not entirely happy with this example, because it's so obviously false that I can't believe it was accepted, except as a heuristic.
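
To illustrate just how badly it fails: strict inequalities need not survive passage to the limit, since

$$ \frac{1}{n} > 0 \ \text{ for every } n, \qquad \text{yet} \qquad \lim_{n \to \infty} \frac{1}{n} = 0. $$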

Thanks in advance!

Best Answer

Another example from real analysis would be the question of the pointwise convergence of the Fourier series of a continuous function (defined on a closed interval). Many people, including Dirichlet and even the master rigorist Weierstrass himself, believed that the Fourier series of such a function converges pointwise everywhere to the function itself. Some clung to this belief so strongly that they even viewed it as an infallible axiom.
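
To state the belief precisely: writing the Fourier coefficients and partial sums of $f$ as

$$ \hat{f}(n) = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(t) e^{-int} \, dt, \qquad S_{N}f(x) = \sum_{n=-N}^{N} \hat{f}(n) e^{inx}, $$

the claim was that $S_{N}f(x) \to f(x)$ as $N \to \infty$ at every point $x$, for every continuous $f$ on $[-\pi, \pi]$ (extended periodically).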

Hence, one can imagine the great upset when, in 1876, Paul du Bois-Reymond proved the existence of a continuous function whose Fourier series diverges at a point. His proof is non-constructive and uses a method called the principle of condensation of singularities. I have absolutely no idea how that method works, but I do know of a very common proof that uses the Baire Category Theorem (which can also be used to prove the existence of continuous functions that are not differentiable at any point).
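
A rough sketch of the Baire category argument, for what it's worth: the evaluation functionals $f \mapsto S_{N}f(0)$ on $C([-\pi, \pi])$ have norms equal to the Lebesgue constants

$$ \frac{1}{2\pi} \int_{-\pi}^{\pi} \left| \frac{\sin\big((N + \tfrac{1}{2})t\big)}{\sin(t/2)} \right| dt \;\sim\; \frac{4}{\pi^{2}} \log N \;\longrightarrow\; \infty, $$

so the uniform boundedness principle (itself a consequence of the Baire Category Theorem) guarantees a continuous $f$ with $\sup_{N} |S_{N}f(0)| = \infty$, i.e. a continuous function whose Fourier series diverges at $0$.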

After the dust had settled in the wake of du Bois-Reymond's seismic discovery, people started fervently believing that there should exist a continuous function whose Fourier series diverges everywhere, an opinion at the other extreme! Andrei Kolmogorov inadvertently lent support to this claim by exhibiting, in 1926, an $L^{1}([-\pi,\pi])$-function whose Fourier series diverges everywhere. However, there was great upheaval once more in Fourier-land when the combined efforts of Lennart Carleson and Richard Hunt in the late 1960s showed that the Fourier series of any $f \in L^{p}([-\pi,\pi])$ converges almost everywhere to $f$, for all $p > 1$ (this result subsumes the case of continuous functions). During an interview with the AMS, Carleson revealed that he had originally tried to disprove his result (pertaining to $p = 2$), but in the end his failure to produce a counterexample convinced him that he should be working in the other direction instead.

Therefore, in the field of Fourier analysis, viewpoints have changed and cherished beliefs have been destroyed, not once but twice.