[Math] Why have mathematicians used differential equations to model nature instead of difference equations

Tags: differential-equations, ho.history-overview, soft-question

Ever since Newton invented calculus, mathematicians have been using differential equations to model natural phenomena, and they have been very successful in doing so.

Yet they could have been just as successful in modeling natural phenomena with difference equations instead of differential equations (just choose a very small $\Delta x$ instead of $dx$). Furthermore, difference equations don't require complicated epsilon-delta definitions; they are simple enough for anybody who knows high-school math to understand.

So why have mathematicians made things difficult by using complicated differential equations instead of simple difference equations? What is the advantage of using differential equations?
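To make the "very small $\Delta x$" point concrete, here is a minimal numerical sketch (my own illustration, not part of the original question): a forward-Euler difference scheme $\Delta y / \Delta x = y$ standing in for the differential equation $dy/dx = y$, $y(0) = 1$. As $\Delta x$ shrinks, the discrete answer approaches the continuum answer $e \approx 2.71828$.

    import math

    def euler(h, x_end=1.0):
        """Forward Euler for dy/dx = y with y(0) = 1, step size h (= Delta x)."""
        n = round(x_end / h)        # number of discrete steps to reach x_end
        y = 1.0
        for _ in range(n):
            y += h * y              # y_{k+1} = y_k + h * y_k
        return y

    exact = math.exp(1.0)           # continuum solution y(1) = e
    for h in (0.1, 0.01, 0.001):
        approx = euler(h)
        print(f"Delta x = {h}: discrete = {approx:.5f}, "
              f"continuum = {exact:.5f}, error = {abs(approx - exact):.1e}")

Each run is nothing more than repeated multiplication by $(1 + \Delta x)$, so the discrete model really is elementary; the question is which description is easier to analyse at scale.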

My question was inspired by this paper by Doron Zeilberger:

"'Real' Analysis is a Degenerate Case of Discrete Analysis",
http://www.math.rutgers.edu/~zeilberg/mamarim/mamarimhtml/real.html

Best Answer

Although small discrete systems are easy to work with, continuum models are easier to deal with than large discrete systems. Whether or not nature is fundamentally discrete, the most useful models are often continuous, because the discreteness only occurs at very small scales. Discreteness is worth including in the model if it shows up at the scales of the situation we are interested in. I think this is to a large extent a question of scales of interest.

For example, if I have a mole of gas in a container, I could well model it as individual particles. But if I want a simpler model to work with and I am only interested in the behaviour at scales well above the atomic one, the usual "continuous" fluid mechanics is a good choice. This is because at such scales the gas is essentially scale invariant (it obeys similar laws if you zoom in) and thus calculus becomes applicable (and very powerful). This is of course not true if I go all the way down to the atomic scale, but I am not interested in that scale, so it does not matter if my model treats the gas the same way at those scales as well. Large-scale continuous quantities like pressure and density give a good understanding (including the ability to make good predictions quickly), and that should not be neglected. (Of course, if I want something coarser, I can go to a thermodynamic description. Either way, the modelling includes a step where the number of particles is taken to infinity to simplify the mathematics.)
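To see in miniature why a smooth density field works so well above the particle scale, here is a toy sketch (my own illustration with made-up numbers, not part of the original answer): scatter $N$ point particles uniformly in a unit interval and "measure" the density in a fixed sub-interval. The relative fluctuation of the measured density shrinks roughly like $1/\sqrt{N}$, so for anything approaching a mole of particles the density is, for all practical purposes, a smooth deterministic field.

    import random

    # Toy model: N particles placed uniformly at random in [0, 1); we measure
    # the density in the window [0, 0.1) and repeat over independent
    # realisations to estimate how much that measurement fluctuates.
    random.seed(0)
    SUB = 0.1       # size of the measurement window (assumed value)
    TRIALS = 100    # number of independent realisations (assumed value)

    for n in (100, 10_000, 100_000):
        samples = []
        for _ in range(TRIALS):
            count = sum(1 for _ in range(n) if random.random() < SUB)
            samples.append(count / (n * SUB))   # measured density / mean density
        mean = sum(samples) / TRIALS
        std = (sum((s - mean) ** 2 for s in samples) / TRIALS) ** 0.5
        print(f"N = {n:7d}: relative density fluctuation ~ {std:.4f}")

The fluctuation drops from roughly 0.3 at $N = 100$ to about 0.01 at $N = 10^5$, consistent with the $1/\sqrt{N}$ scaling; at $N \sim 10^{23}$ it is utterly negligible, which is what licenses the continuum description.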

The "scales of interest" phenomenon happens in both directions; we may neglect both too small and too large scales. For example, it might be a good idea to model a long rod by an infinitely long one (thus in a sense removing discreteness from the model). Then one can apply Fourier analysis or any other such tools that assume that the rod is infinitely long and mathematics becomes easier. This is maybe more common with respect to time than length: Fourier or Laplace transforms with respect to time are used for systems that have finite lifetime. If we are not interested in very large scales, we can assume our system to be infinitely large.

Discrete models are probably useful if nature has a genuinely discrete structure (in the physical system in question) and we are interested in phenomena at the scale where the discreteness is visible. But seen at a larger scale, a discrete model would contain something (particles or some other discrete structure) that we cannot measure and might not even be interested in. Something that cannot be measured and does not have a significant impact on the behaviour of the system should be left out of the model. This is related to the observation that continuum models often work well for large discrete systems.

Let me conclude with an observation that is easy to miss because we are so used to it: At human scales nature seems continuous.