[Math] If instantaneous rates of change aren’t that rigorous, how correct is the usage of instantaneous rates of change (like velocity) by physicists

calculus, derivatives, limits, mathematical-physics, soft-question

According to this answer, instantaneous rates of change are more intuitive than they are rigorous.

I tend to agree with that answer because the Wikipedia article on differential calculus doesn't define the derivative as the slope at a particular point. It defines it as, "The derivative of a function at a chosen input value describes the rate of change of the function near that input value." Although this isn't wrong, the definition is written rather cautiously, and I think that was intentional. It is only in the explanations section of the Derivative article that the slope interpretation appears: "The derivative of a function y = f(x) of a variable x is a measure of the rate at which the value y of the function changes with respect to the change of the variable x. It is called the derivative of f with respect to x. If x and y are real numbers, and if the graph of f is plotted against x, the derivative is the slope of this graph at each point."

So, are physicists using terms like "instantaneous velocity" merely from an intuitive standpoint? What is the physical significance of instantaneous rates of change?

Best Answer

It's perfectly fine to use intuition when applying mathematics; it's just that in mathematics itself we want rigorous definitions so we can actually prove things. We seek definitions that formalize our intuition about something.

Just consider the simple example of finding the derivative of $f(x) = x^2$. Using the "intuitive definition", it's not really clear that this should equal $2x$. You could of course look at a few examples and extrapolate from those that it should be true, but how can you really be sure? In contrast, the "hard" definition (which can also be considered rather intuitive) lets you construct the derivative directly.
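For concreteness, the limit definition makes the computation mechanical:

$$f'(x) = \lim_{h \to 0} \frac{(x+h)^2 - x^2}{h} = \lim_{h \to 0} \frac{2xh + h^2}{h} = \lim_{h \to 0} (2x + h) = 2x.$$

No extrapolation from examples is needed; the answer falls out of the definition.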

The approach mathematicians often take is to start with some concept, state which properties it should have, try to formalize those, and see whether the resulting thing:

  • already "nails down" the concept well enough, or is still too general;
  • conforms to our intuition.

So we defined the mathematical concept of the derivative the way we did because it corresponds to our intuitive notion of rate of change, and thus it should be applicable in circumstances where the intuitive thing is asked for.

When applying math in physics, engineering, etc., you of course always have to consider whether it makes sense to model some real-life phenomenon via the idealized mathematical version: What assumptions go into a derivative? Are they compatible with the real world? Surely some notion of continuity is needed for a derivative. Is the real world continuous? We really don't know, and afaik (not a physicist) we can never find out.

That's why physics is more than just building theories: we also need to do experiments to see if our theory corresponds with the real world up to some acceptable margin of error. And judging from experiments and how successful we are in modelling the real world using differential calculus, it would seem that using the intuition behind derivatives in the real world isn't totally wrong.
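As a side note (my illustration, not part of the argument above): in practice, an "instantaneous" velocity is never measured directly. One samples positions at discrete times and forms a difference quotient over a small interval, trusting that it approximates the limit. A minimal sketch in Python, using a made-up free-fall position function $s(t) = 4.9t^2$:

```python
# Sketch: approximating "instantaneous" velocity by difference quotients.
# The position function s(t) = 4.9*t**2 (idealized free fall, meters,
# g ~ 9.8 m/s^2) is a stand-in example, not taken from the original post.

def position(t):
    """Position of a freely falling body after t seconds (idealized model)."""
    return 4.9 * t**2

def average_velocity(s, t, dt):
    """Difference quotient (s(t+dt) - s(t)) / dt: what we can actually measure."""
    return (s(t + dt) - s(t)) / dt

t = 2.0  # time of interest, in seconds
for dt in [1.0, 0.1, 0.01, 0.001]:
    v = average_velocity(position, t, dt)
    print(f"dt = {dt}: average velocity ~ {v:.4f} m/s")
```

The difference quotients here come out as 24.5, 20.09, 19.649, 19.6049, ... m/s, converging to the exact derivative $s'(2) = 19.6$ m/s as the interval shrinks; that convergence is precisely what the limit definition formalizes, and it's why treating measured average velocities as "instantaneous" works as well as it does.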