Early Calculus – Was It Inconsistent?

ho.history-overview, lo.logic, nonstandard-analysis

This question does NOT concern the RIGOR, or lack thereof, of the early calculus. Rather, the question is one of its CONSISTENCY.

George Berkeley wrote in 1734 with reference to the early calculus that such a method is "a most inconsistent way of arguing, and such as would not be allowed in Divinity". This passage is quoted by William Dunham in 2004. Dunham concludes: "Bishop Berkeley had made his point. Although the results of the calculus seemed to be valid … none of this mattered if the foundations were rotten". See page 72 of http://books.google.co.il/books?id=QnXSqvTiEjYC&source=gbs_navlinks_s

On the other hand, Peter Vickers in 2007 challenged "The ubiquitous assertion that the early calculus of Newton and Leibniz was an inconsistent theory" at http://philsci-archive.pitt.edu/3477/ (soon to appear in book form at Oxford University Press), and concluded that this only holds in a limited sense and "can only be imputed to a small minority of the relevant community".

Was the early calculus consistent as far as most practitioners were concerned, as Vickers contended, or was it "a most inconsistent way of arguing", as Berkeley charged and Dunham concurred?

Note 1. Berkeley claimed that calculus was based on an inconsistency that can be expressed in modern notation as $(dx\not=0)\wedge(dx=0)$. Thus he was using the term "inconsistent" in much the same sense as it is used in modern logic.

Note 2. For a closely related thread, see https://math.stackexchange.com/questions/445166/is-mathematical-history-written-by-the-victors

Note 3. There is a related thread at the history SE: https://hsm.stackexchange.com/questions/3301

Best Answer

Coming back to Bishop Berkeley's criticism, there is a common denominator to all known workarounds, both the two mainstream ones (Weierstrass-style limits and nonstandard analysis) and exotic ones like the SDG (synthetic differential geometry) interpretation.

That is, one considers an extension of the true reals $R$, call it $R^+$, and a map $R^+ \to R\cup \{\infty\}$, call it the valuation map. For instance:

1) $R^+$ consists of all convergent infinite real sequences, and the valuation map is the "taking the limit" map;

2) $R^+$ is a nonstandard extension of $R$, and the valuation map is the "standard part" map (taking the value $\infty$ on infinitely large elements);

3) nilpotent infinitesimals, or any other applicable exotics.

It turns out that the valuation map cannot be a homomorphism; it always loses something. For instance, the value of a nonzero infinitesimal is $0$ and the value of its inverse is $\infty$, yet their product has value $1$, while $0\cdot \infty=1$ makes no sense in $R$.
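In the sequence model (option 1 above), this failure can be made concrete. The following is a minimal numerical sketch, with function names and thresholds of my own choosing; it evaluates a sequence far out as a stand-in for taking its limit, so it illustrates the idea rather than proving anything:

```python
import math

def valuation(seq, big_n=10**8, inf_threshold=1e6):
    """Sketch of the 'taking the limit' valuation map: evaluate the
    sequence far out and read off its (approximate) value; return
    math.inf once the terms have grown past inf_threshold."""
    x = seq(big_n)
    return math.inf if abs(x) > inf_threshold else x

eps = lambda n: 1 / n                    # a nonzero infinitesimal: valuation ~ 0
inv_eps = lambda n: float(n)             # its inverse: valuation is infinity
product = lambda n: eps(n) * inv_eps(n)  # termwise product: constantly ~ 1

# valuation(product) is close to 1, but valuation(eps) * valuation(inv_eps)
# amounts to 0 * infinity, which is indeterminate (IEEE arithmetic gives nan):
print(valuation(product))      # close to 1.0
print(0.0 * math.inf)          # nan -- the map is not multiplicative
```

The point the code makes is exactly the one above: the valuation of the product is $1$, while the product of the valuations, $0\cdot\infty$, is undefined.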

This is, I believe, the only sound way to view the early controversies around infinitesimals: accept that a nonzero infinitesimal is not equal to the real number $0$; it merely has the value $0$. Perhaps a devoted scholar of Leibniz, Euler, etc. (although there is not much "etc." after Euler!) can find support for this point of view.

Obviously, a modern mathematician would ask either for a concrete, mathematically defined model of both $R^+$ and the valuation map (the two mainstream such models are listed above, with perhaps more yet to come under category 3), or at least for a formulation as a calculus of propositions, with rigorous rules of inference albeit without a fixed interpretation of the objects.

Vladimir Kanovei