[Math] Analogy to the purpose of Taylor series

calculus, soft-question, taylor-expansion

I want to know an analogy for the purpose of Taylor series. I did a Google search of the web and videos: they all talk about what a Taylor series is and give examples of it, but no analogies. I am not a math geek, and this is my attempt to re-learn calculus in a better way, to understand physics and linear algebra.

Having an analogy will indeed help me see its use in real life; learning seems lacking if a concept can't be applied. I would appreciate an explanation in layman's terms at this (for me) challenging point.

I have read this post so far: What are the practical applications of Taylor series? That post (its answers and comments) raises the bar of my expectations for a satisfying answer to my question.

Best Answer

Just to give you an idea of where we're headed, here's the punch line of everything I'm about to say:

If all you know about a function is its first few derivatives at a certain time, the corresponding Taylor polynomial is the best guess at the function you can make with the information you have.

That's pretty dry, though. I'm not really feeling it. So, let me set the scene.


You are piloting an airship across a vast, uncharted continent. In this place, dark clouds obscure the stars at all times, so you navigate the only way you can: using sensitive instruments to record every detail of your motion. Even as the airspeed indicator measures your velocity, an accelerometer is already reporting its rate of change, and a tower of even stranger and more sophisticated sensors tracks every jolt, snap, crackle, and pop. At the top of the sensor mast, ensconced in a snarl of cables, a powerful computer drinks up the flood of data, assembling a heavily redundant record of your journey.

The air here is so thin and smooth that the instrument readings barely change from hour to hour. If the accelerometer registers the force of a gentle tailwind, you can watch your velocity creep steadily upward for minutes on end, just as the accelerometer promised.

One night, you are woken by a flash of lightning—not just a flash, but a blinding sheet, pouring down every window. You listen for thunder, but none arrives. The sky is inky black; the glowing digits of the clock beside your bunk read 05:36:22. You go back to sleep.

When you wake up, the clock reads 05:36:22.

Cursing, you jump out of bed and dash to the course computer. The column of log data is frozen halfway down the screen; the last entry is timestamped with the same digits hovering on the face of the clock. You empty every drawer in your cabin looking for your old mechanical watch, which confirms that several hours have passed since the lightning strike. It could be days before you get the computer running again. By then, where will you be?


You have no way of knowing. But, looking at that final log entry, you can try to guess. Let's say $x(t)$ is the distance you'll drift in the first $t$ time units after the lightning strike. If the only thing you know is your velocity at the time of the strike, $x'(0)$, the best you can do is hope that you'll keep moving at about the same velocity, so $$x(t) \approx x'(0)\,t.$$ This guess is consistent with all the information you have, because the velocity given by your guess at the time of the strike matches the velocity recorded in the log. You can see this by taking the derivative of both sides of the approximate equation above and then setting $t$ to zero.
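
Spelled out, writing $\hat{x}(t) = x'(0)\,t$ for the guess (the hat is just a label I'm introducing): $$\hat{x}(0) = 0 = x(0) \qquad \text{and} \qquad \hat{x}'(t) = x'(0), \;\text{ so }\; \hat{x}'(0) = x'(0).$$ The guess starts where you started and moves at exactly the velocity the log recorded.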

Based on your experience traveling in these parts, you can actually make a pretty strong guarantee that your guess isn't too far from the truth. Let's define a new function $$\epsilon_2(t) = x(t) - x'(0)\,t,$$ which measures the difference between your actual and estimated positions. Since $\epsilon_2(0)$ is zero, the fundamental theorem of calculus tells us that $$\epsilon_2(t) = \int_0^t \epsilon_2'(s)\;ds.$$ We saw earlier that $\epsilon_2'(0)$ is also zero, and you can easily check that $\epsilon_2''(t) = x''(t)$. Hence, $$\epsilon_2'(t) = \int_0^t \epsilon_2''(s)\;ds = \int_0^t x''(s)\;ds.$$ If you're confident that the magnitude of your acceleration $x''$ won't go above $M_2$ between times zero and $t$, you can be confident that $\left| \epsilon_2'(t) \right| \le M_2\,t$, and therefore that $$\left| \epsilon_2(t) \right| \le M_2\frac{t^2}{2}.$$ This guarantee is a baby version of Taylor's theorem.
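
If you'd like to watch this guarantee hold, here's a minimal numerical check (my own sketch, using a made-up drift $x(t) = \sin t$, so that $|x''(t)| \le 1$ and we can take $M_2 = 1$):

```python
import numpy as np

# Hypothetical drift x(t) = sin(t): x(0) = 0, x'(0) = 1,
# and |x''(t)| = |sin(t)| <= 1, so M2 = 1 is a safe bound.
M2 = 1.0
t = np.linspace(0.0, 2.0, 9)

x = np.sin(t)                  # true position
guess = 1.0 * t                # linear guess x'(0) * t
error = np.abs(x - guess)      # |epsilon_2(t)|
bound = M2 * t**2 / 2          # the baby Taylor guarantee

for ti, e, b in zip(t, error, bound):
    print(f"t = {ti:4.2f}   |error| = {e:7.5f}   bound = {b:7.5f}")
```

Every error stays below its bound, and both shrink quadratically as $t$ approaches zero.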


If you know more than just your velocity at the time of the lightning strike, you can make a better estimate of your course. If the last log entry tells you the first $n$ derivatives of your position at the time of the strike, $x'(0)$ through $x^{(n)}(0)$, you can guess that $$x(t) \approx x'(0) \frac{t}{1!} + x''(0) \frac{t^2}{2!} + x'''(0) \frac{t^3}{3!} + \ldots + x^{(n)}(0) \frac{t^n}{n!}.$$ Just as before, this guess is consistent with all the information you have, because its first $n$ derivatives at the time of the strike match the derivatives recorded in the log.

With more information available, you can make a stronger guarantee about the accuracy of your guess. Once again, define a function $$\epsilon_{n+1}(t) = x(t) - \left[ x'(0) \frac{t}{1!} + x''(0) \frac{t^2}{2!} + x'''(0) \frac{t^3}{3!} + \ldots + x^{(n)}(0) \frac{t^n}{n!} \right]$$ measuring the difference between your guess and the truth. Using the same repeated integration technique as before, you can be confident that $$\left| \epsilon_{n+1}(t) \right| \le M_{n+1} \frac{t^{n+1}}{(n+1)!}$$ if you're confident that the magnitude of $x^{(n+1)}$ won't go above $M_{n+1}$ between times zero and $t$.
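
Here's the same experiment carried to higher degree (again my own sketch with $x(t) = \sin t$, whose derivatives at zero cycle through $0, 1, 0, -1$, so every $M_{n+1} = 1$ works):

```python
import math

def deriv_at_zero(k):
    # Derivatives of sin at 0 cycle: sin, cos, -sin, -cos -> 0, 1, 0, -1
    return [0.0, 1.0, 0.0, -1.0][k % 4]

def taylor_guess(t, n):
    """Degree-n Taylor polynomial at 0, built only from x^(k)(0)."""
    return sum(deriv_at_zero(k) * t**k / math.factorial(k)
               for k in range(n + 1))

t = 1.5
for n in (1, 3, 5, 7):
    error = abs(math.sin(t) - taylor_guess(t, n))
    bound = t**(n + 1) / math.factorial(n + 1)   # M_{n+1} = 1
    print(f"n = {n}:  |error| = {error:.2e}  <=  bound = {bound:.2e}")
```

Each extra pair of derivatives buys a few more digits of accuracy, just as the $(n+1)!$ in the guarantee suggests.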


Now we're ready to hear the punch line again:

If all you know about a function is its first few derivatives at a certain time, the corresponding Taylor polynomial is the best guess at the function you can make with the information you have.

For some functions, you can make your guess as accurate as you want—for times close to the starting time, at least—just by using a Taylor polynomial with more terms. If you use all the terms, extending the Taylor polynomials to an infinite Taylor series, you'll be able to guess the function perfectly for a short period of time! Functions like this are called analytic. A classic example is the function $$x(t) = \frac{1}{1+t^2}.$$ Its Taylor series, $$1 - t^2 + t^4 - t^6 + \ldots,$$ predicts its behavior perfectly when $|t| < 1$.
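
You can watch the partial sums do this (my sketch; the cutoffs are arbitrary):

```python
def partial_sum(t, terms):
    """First `terms` terms of 1 - t^2 + t^4 - t^6 + ..."""
    return sum((-1)**k * t**(2 * k) for k in range(terms))

for t in (0.5, 0.9, 1.1):
    print(f"t = {t}: exact 1/(1+t^2) = {1.0 / (1.0 + t**2):.6f}")
    for terms in (5, 20, 80):
        print(f"  {terms:2d} terms: {partial_sum(t, terms):.6f}")
```

Inside $|t| < 1$ the partial sums settle onto $1/(1+t^2)$; at $t = 1.1$ they run away to enormous values instead.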

The Taylor polynomials of a non-analytic function are still good guesses, but there's a limit to how good they can get. Consider, for instance, the function $$x(t) = \begin{cases}e^{-1/t^2} & t \neq 0 \\ 0 & t = 0. \end{cases}$$ At $t = 0$, all the derivatives of this function are zero! Based on its derivatives at $t = 0$, the best you can do is guess that $x(t) \approx 0$. For times close to zero, this guess is actually really good: when $|t|$ is less than $0.2$, $\left|x(t)\right|$ is less than $10^{-10}$, and when $|t|$ is less than $0.1$, $\left|x(t)\right|$ is less than $10^{-40}$. On the other hand, the guess $x(t) \approx 0$ definitely isn't perfect, and using a Taylor polynomial with more terms won't make it any better. The only way to squeeze more accuracy out of it is to look at times closer to zero.
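
Those two numbers are easy to confirm (my sketch):

```python
import math

for t in (0.2, 0.1):
    print(f"t = {t}: exp(-1/t^2) = {math.exp(-1.0 / t**2):.2e}")
# t = 0.2: exp(-25)  is about 1.4e-11 (below 1e-10)
# t = 0.1: exp(-100) is about 3.7e-44 (below 1e-40)
```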

We saw earlier that the $n$th-degree Taylor polynomial of a function will stay accurate for as long as the function's $(n+1)$st derivative stays small. Thus, you might suspect that something funny must be going on with the higher derivatives of $e^{-1/t^2}$—and you'd be right. The higher derivatives of this function stay small for a while, but then spike to enormous levels, with each derivative going more berserk than the last.
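
If you have SymPy handy, you can watch the derivatives go berserk yourself (my sketch; it scans $(0, 1]$ numerically for the peak size of each derivative):

```python
import numpy as np
import sympy as sp

t = sp.symbols('t', positive=True)
x = sp.exp(-1 / t**2)     # the non-analytic function (with x(0) := 0)

ts = np.linspace(1e-3, 1.0, 2000)
expr = x
for k in range(1, 8):
    expr = sp.diff(expr, t)                 # k-th derivative, symbolically
    xk = sp.lambdify(t, expr, 'numpy')      # compile for fast evaluation
    print(f"max |x^({k})| on (0, 1] is about {np.max(np.abs(xk(ts))):.3e}")
```

The printed maxima climb rapidly with $k$, which is exactly the spiking behavior described above.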

Strange as it may sound, this kind of behavior is pretty common in nature. Many solutions to the heat equation, for example, are non-analytic, and the energy levels of a hydrogen atom depend non-analytically on the ambient electric field. Oh, and that function $e^{-1/t^2}$ we've been playing with? It shows up all the time in quantum field theory, as that second link attests.

Analytic functions are common in nature too—so common that, in some basic science classes, they're the only kind of function you'll ever use. In many situations, their ubiquity is explained by a theorem developed by a series of great 19th-century analysts, starting with Augustin Cauchy and ending with Sofia Kovalevskaya. The Cauchy-Kovalevskaya theorem describes an enormous class of partial differential equation problems guaranteed to yield analytic solutions. Using a tiny fraction of its power, you can prove that solutions of the following equations are always analytic:

  • Newton's equation, $\mathbf{F}(t, \mathbf{x}) = m\mathbf{x}''$, whenever the force $\mathbf{F}$ is analytic. This equation shows up not only in mechanics, but also in its twin siblings, electronics and hydraulics.
  • Reaction rate equations, like the one Casey Gray wrote down for the BZ reaction in this paper. Many ecological population models are based on reaction rate equations, like the logistic equation and the Lotka-Volterra equation, in which the reactants are organisms rather than molecules. Reaction rate equations even appear in descriptions of things that look nothing at all like chemical reactions: that's what happened when Edward Lorenz set out to build a simplified model of a convection cell. The solutions of the Lorenz equation may be chaotic, but they're also analytic!
  • The Friedmann equation, $$a' = \sqrt{\tfrac{8\pi}{3} a^2\,\rho(a) - k},$$ with the range of $a$ and the domain of $\rho$ restricted to the positive real numbers, whenever $\rho$ is analytic and the curvature parameter $k$ is zero or negative. This includes the typical case where $k = 0$ and $\rho(a) = \rho_\text{r} a^{-4} + \rho_\text{m} a^{-3} + \rho_\Lambda,$ describing a flat universe that has radiation density $\rho_\mathrm{r}$, matter density $\rho_\text{m}$, and dark energy density $\rho_\Lambda$ when $a = 1$.
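
To make that last, typical case concrete, here's a minimal numerical integration of the flat-universe equation (my own sketch; the density constants are made-up values for illustration, in units where the equation reads exactly as above):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Made-up density constants, for illustration only
rho_r, rho_m, rho_L = 1e-4, 0.3, 0.7

def friedmann(t, a):
    """a' = sqrt((8*pi/3) * a^2 * rho(a)) for a flat universe (k = 0)."""
    rho = rho_r * a**-4 + rho_m * a**-3 + rho_L
    return np.sqrt((8 * np.pi / 3) * a**2 * rho)

sol = solve_ivp(friedmann, (0.0, 1.0), [0.01], max_step=0.01)
print(f"scale factor grows from {sol.y[0][0]} to {sol.y[0][-1]:.3f}")
```

The theorem promises that the resulting $a(t)$ is not just smooth but analytic, as long as $a$ stays positive.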

This ought to give you some idea of why analytic functions might be common in fields like physics, chemistry, biology, and cosmology.
