[Math] Why does a Maclaurin series approximate a function accurately for large $x$

taylor expansion

I understand that a Maclaurin series matches the derivatives of a function at the origin, so it makes sense that it approximates the function well at and near the origin. But why, as you add more terms to the polynomial approximation, does it also become more accurate far from the origin?

For example, this is a graph of $\sin(x)$ and the Maclaurin series of $\sin(x)$ up to the term in $x^5$.

As you can see, this is a good approximation of $\sin(x)$ up to about $x=2$. However, now look at the Maclaurin series of $\sin(x)$ up to the term in $x^9$.

We can see that including more terms gives a better approximation (as is to be expected), but is there an intuitive reason why it becomes a better approximation for large $x$?

Best Answer

I think the answer depends on what kind of accuracy you're looking for and exactly what function you're dealing with.

The sine function happens to be a relatively "nice" function to do a Maclaurin series for. Recall that if we include terms up to $x^n$ in the Maclaurin series of a function $f(x)$, the error term is $$ \frac{1}{(n+1)!} f^{(n+1)}(\xi)\, x^{n+1}, \text{for some $\xi$ such that $0<\xi<x$.} $$ If $f(x) = \sin(x)$, we know $\left\lvert f^{(n+1)}(\xi) \right\rvert \leq 1$, so the error is at most $\frac{\lvert x\rvert^{n+1}}{(n+1)!}$. Once $n > \lvert x\rvert$, each further increase of $n$ shrinks this bound, and it tends to zero as $n \to \infty$. Even if $x$ is very large, it just means you need very large values of $n$.
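A quick numerical sketch of this (my own illustration, not part of the original answer; `sin_maclaurin` is a hypothetical helper): the partial sums of the Maclaurin series of $\sin(x)$ at $x=10$ are wildly off until $n$ passes $x$, after which the error collapses.

```python
import math

def sin_maclaurin(x, n):
    """Partial sum of the Maclaurin series of sin(x), including terms up to x^n."""
    total = 0.0
    k = 1
    while k <= n:
        # sin(x) = x - x^3/3! + x^5/5! - ...
        total += (-1) ** ((k - 1) // 2) * x ** k / math.factorial(k)
        k += 2
    return total

x = 10.0
for n in (5, 9, 15, 25, 35):
    err = abs(sin_maclaurin(x, n) - math.sin(x))
    print(f"n = {n:2d}   |error| = {err:.3e}")
```

For small $n$ the error at $x=10$ is off by orders of magnitude, while by $n=35$ (comfortably past $x$) the bound $10^{36}/36!$ has made it negligible.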

Even for a function like $f(x) = \sin(2x)$, where the higher derivatives can get quite large, we still have $\left\lvert f^{(n+1)}(\xi) \right\rvert \leq 2^{n+1}$, and the factorial $n!$ still grows faster than $2^n$. We might need to go to $n>2x$ before the approximation is really any good, but that's OK.
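To see the "need roughly $n > 2x$" effect concretely, here is a small sketch (my own; `terms_needed` is a hypothetical helper) that finds the smallest $n$ for which the error bound $(c\lvert x\rvert)^{n+1}/(n+1)!$ for $\sin(cx)$ drops below a tolerance.

```python
import math

def terms_needed(c, x, tol=1e-6):
    """Smallest n for which the remainder bound (c*x)**(n+1) / (n+1)!
    for the Maclaurin series of sin(c*x) drops below tol."""
    n = 0
    while (c * x) ** (n + 1) / math.factorial(n + 1) >= tol:
        n += 1
    return n

x = 5.0
print("sin(x): ", terms_needed(1, x))
print("sin(2x):", terms_needed(2, x))
```

Doubling the frequency roughly doubles the degree you need before the factorial wins, but it always wins eventually.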

But not every function is as "nice" as the sine.

Obvious problems occur with functions that have asymptotes, such as $f(x) = 1/(x+1)$. The Maclaurin series for this function, $1-x+x^2-x^3+x^4-x^5+O(x^6)$, has a radius of convergence equal to $1$, so the series converges only for $-1 < x < 1$. Adding more terms will get you a better approximation within that interval, but not outside it.
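The divergence is easy to watch numerically. A minimal sketch (my own, not from the answer) compares partial sums of $1 - x + x^2 - \cdots$ with $1/(1+x)$ inside and outside the radius of convergence:

```python
def geom_partial(x, n):
    """Partial sum 1 - x + x^2 - ... + (-x)^n: the Maclaurin series of 1/(1+x)."""
    return sum((-x) ** k for k in range(n + 1))

for x in (0.5, 2.0):
    exact = 1.0 / (1.0 + x)
    for n in (5, 10, 20):
        print(f"x = {x}  n = {n:2d}  |error| = {abs(geom_partial(x, n) - exact):.3e}")
```

For $x = 0.5$ the error shrinks geometrically as $n$ grows; for $x = 2$ each extra term roughly doubles it.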

Another limitation of the Maclaurin series is that it is based entirely on the derivatives of the desired function at zero. A function does not need to be discontinuous (like $1/(x+1)$) or have an undefined $n$th derivative in order to be "not nice" for the Maclaurin series; it can do other unpleasantly surprising things. Such a function is described on page 86 of an online textbook by John K. Hunter. That function is $$ \phi(x) = \begin{cases} e^{-1/x} & \text{if $x > 0$,}\\ 0 & \text{if $x \leq 0$.} \end{cases} $$ All the derivatives of this function are defined everywhere, and every derivative at $x=0$ is zero (as, of course, are all the derivatives at $x<0$). So the Maclaurin series of $\phi(x)$ is just the constant zero, which matches the function perfectly for $x \leq 0$ but not for $x > 0$, and there the approximation never gets better as we add more terms.
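Since the Maclaurin series of $\phi$ is identically $0$, the approximation error at any $x > 0$ is simply $\phi(x)$ itself, no matter how many terms we keep. A quick sketch (my own illustration):

```python
import math

def phi(x):
    """Hunter's flat function: e^(-1/x) for x > 0, and 0 for x <= 0."""
    return math.exp(-1.0 / x) if x > 0 else 0.0

# The Maclaurin series of phi is identically zero, so the error of the
# series approximation at x is just phi(x) itself.
for x in (0.01, 0.1, 1.0, 10.0, 100.0):
    print(f"x = {x:6}   phi(x) = {phi(x):.6f}")
```

Near zero $\phi$ is extremely flat ($\phi(0.1) = e^{-10}$ is already tiny), which is exactly why every derivative at the origin vanishes; yet by $x = 100$ the function has climbed to about $0.99$.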

It's as if the function $\phi(x)$ somehow "fools" the Maclaurin series by "sneakily" approaching $1$ at large values of $x$ without giving any clue (in its derivatives at zero) that it was going to do anything but remain a constant zero as $x$ increased.

Now consider the Maclaurin series for $$ g(x) = \sin(x) + \phi(x-1) = \begin{cases} \sin(x) + e^{-1/(x-1)} & \text{if $x > 1$,}\\ \sin(x) & \text{if $x \leq 1$.} \end{cases} $$ That's easy: it's the same as the Maclaurin series for $\sin(x)$, which we already know. But look what happens at $x=2\pi$, for example: the more terms we add to the series, the better it approximates $\sin(2\pi)=0$, but the actual value of $g(2\pi)$ is about $0.828$.
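Numerically (again my own sketch, with `sin_series` as a hypothetical helper): the partial sums at $x=2\pi$ converge to $0$, while $g(2\pi)$ stays near $0.828$.

```python
import math

def sin_series(x, n):
    """Maclaurin partial sum of sin(x) up to x^n; it is also the Maclaurin series of g."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range((n + 1) // 2))

def g(x):
    """g(x) = sin(x) + phi(x - 1)."""
    return math.sin(x) + (math.exp(-1.0 / (x - 1)) if x > 1 else 0.0)

x = 2 * math.pi
for n in (9, 19, 29):
    print(f"n = {n:2d}   series = {sin_series(x, n): .6f}   g(x) = {g(x):.6f}")
```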

So the more terms we add, the better the series approximates $\sin(x)$, but for $x > 1$ it never gets any closer to $g(x)$: the $\phi(x-1)$ part is invisible to every derivative at zero.