[Math] How did Newton and Leibniz actually do calculus

calculus · math-history

How did Leibniz know to write derivatives as $$\frac{dy}{dx}$$ so that everything would work out? For example, the chain rule: $$\frac{dy}{dz}=\frac{dy}{dx}\frac{dx}{dz}$$ Integration by parts: $$xy=\int d(xy)=\int (x\,dy+y\,dx) \implies \int x\,dy = xy-\int y\,dx$$ Separable differential equations: $$\frac{dy}{dx}=\frac{x}{y}\implies y\,dy=x\,dx\implies y^2-x^2=C$$ Even basic derivatives such as $$\frac{dx}{dx}=1$$ It seems like the differentials just cancel!
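The chain-rule identity above can at least be checked numerically. A minimal sketch (the functions, the point $z=1.5$, and the step size `h` are my own illustrative choices) replacing the infinitesimals with small finite differences:

```python
# Numerically check the chain rule dy/dz = (dy/dx)(dx/dz)
# for y = sin(x) with x = z**2, at z = 1.5. Here h is a small
# finite stand-in for an "infinitely small" increment.
import math

h = 1e-6
z = 1.5

def x_of_z(z):
    return z ** 2

def y_of_x(x):
    return math.sin(x)

# Central-difference approximations of the three derivatives.
dy_dz = (y_of_x(x_of_z(z + h)) - y_of_x(x_of_z(z - h))) / (2 * h)
dx_dz = (x_of_z(z + h) - x_of_z(z - h)) / (2 * h)
x0 = x_of_z(z)
dy_dx = (y_of_x(x0 + h) - y_of_x(x0 - h)) / (2 * h)

# The product of the two factors matches the direct derivative.
print(abs(dy_dz - dy_dx * dx_dz) < 1e-6)
```

Of course this is a finite-difference check, not a proof; the question is precisely why the formal cancellation is reliable.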

Everyone I ask either 1) says it is essentially a lucky accident, 2) presents a "counterexample" that I usually don't find valid, or 3) says that it can be made rigorous but that doing so is very tedious… but clearly Leibniz was in none of these three situations. He must have had some reason to know why his notation worked so well – after all, he invented it.

As for Newton, did he know the same things as Leibniz? How come he wasn't able to come up with an equally useful notation – did he perhaps think about calculus differently?

Best Answer

Leibniz regarded $dx$ and $dy$ respectively as infinitely small increments of $x$ and $y$, so that $dy/dx$ is the ratio of the infinitely small change in $y$ corresponding to the infinitely small change $dx$ in $x$. Thus if $dy$ is $7$ times as big as $dx$ at a particular point, then at that point $y$ is changing $7$ times as fast as $x$.
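This ratio picture can be sketched with finite increments standing in for the infinitesimals (the function $y=x^3$, the point $x=2$, and the step sizes are my own choices, not Leibniz's):

```python
# Illustrate dy/dx as a literal ratio of increments: for
# y = x**3 at x = 2, the ratio dy/dx approaches the derivative
# 3*x**2 = 12 as the increment dx shrinks toward zero.
def f(x):
    return x ** 3

x = 2.0
for dx in (1e-1, 1e-3, 1e-5):
    dy = f(x + dx) - f(x)      # the corresponding increment in y
    print(dx, dy / dx)         # ratio tends to 12
```

For Leibniz the ratio is not a limit of such quotients but an actual quotient of infinitely small quantities; the finite-difference version is only a modern approximation to his picture.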

Leibniz regarded $\displaystyle\int_a^b f(x)\,dx$ as a sum of infinitely many infinitely small numbers $f(x)\,dx$. Think of $dx$ as the length of an infinitely short interval on the $x$ axis, so $f(x)\,dx$ is the infinitely small area under the curve over that interval.
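That sum-of-slivers picture can likewise be sketched with a small but finite $dx$ (the integrand $x^2$ on $[0,1]$ and the number of slivers are illustrative choices):

```python
# Leibniz's picture of the integral as a sum of terms f(x)*dx:
# here the area under y = x**2 on [0, 1], whose exact value
# is 1/3, built from 100,000 thin rectangles.
def f(x):
    return x ** 2

a, b, n = 0.0, 1.0, 100_000
dx = (b - a) / n
# Sum f(x)*dx over midpoints of each short interval.
total = sum(f(a + (i + 0.5) * dx) * dx for i in range(n))
print(total)  # close to 1/3
```

Shrinking `dx` further drives the sum toward the exact area, which is the finite shadow of Leibniz's infinitely fine subdivision.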

I answered a related question here.

Introductory calculus courses that conceal these matters are grossly dishonest.