[Math] What makes a Maclaurin Series special or important compared to the general Taylor Series

calculus, power-series

I realize that the Maclaurin Series is a special form of the Taylor Series where the series is centered at $x=0$, but I have to wonder what's so special about it that it deserves its own designation. On that point, how would you know (or care) which point to choose as the center of a Taylor Series?

Best Answer

Expanding on the comment above, the idea is that we really like the expression

$$ \sum_{k=0}^\infty a_k z^k, $$ simply because it is easy to manipulate and involves less writing than a series with powers of $(z-a)$. So a lot of the time we like to shift our function so that the "point of interest" is simply $0$ (mathematicians try to be efficient, I suppose).
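To make the "shifting" idea concrete (a small illustration of my own, not part of the original answer), expanding about $a$ is just a change of variable: set $w = z-a$, so a series in powers of $(z-a)$ becomes an ordinary power series in $w$. For instance, with $f(z)=e^z$,

$$ e^z = e^a e^{z-a} = e^a \sum_{k=0}^\infty \frac{(z-a)^k}{k!} = e^a \sum_{k=0}^\infty \frac{w^k}{k!}, $$

which is nothing but the Maclaurin series of the shifted function $g(w)=f(w+a)$.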

Typically we expand in a Taylor series (or more generally, a Laurent Series) about the point $z=a$ to investigate the behavior of $f$ near $a$. Is $f$ well behaved, or does it blow up? Can it be approximated using polynomials? If so, how good is this approximation and how far away from $a$ will it hold? This third question is the basis of many classical numerical analysis algorithms, including numerical differentiation and integration, as well as solution methods for ODEs.
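As one concrete example of the "blow up" case (my example, not the original answer's), take $f(z)=\dfrac{1}{z(1-z)}$ near $z=0$. Its Laurent series there is

$$ \frac{1}{z(1-z)} = \frac{1}{z} + 1 + z + z^2 + \cdots, \qquad 0 < |z| < 1, $$

and the single negative power $\dfrac1z$ tells us exactly how $f$ blows up at $0$ (a simple pole), while the remaining terms form an ordinary, well-behaved power series.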

The analysis of these methods relies heavily on Taylor series - for example, say we're at $x=a$ and want to approximate the value of the function $f$ at $a+h$, a little distance away. The Taylor series about $x=a$ reads:

$$ f(x)=f(a)+f^\prime(a)(x-a)+\frac{f^{\prime\prime}(a)}{2}(x-a)^2+O((x-a)^3) $$ where the "big-O-$(x-a)^3$" means a quantity bounded by a constant multiple of $(x-a)^3$ as $x$ approaches $a$. If we evaluate this Taylor approximation at $x=a+h$, we arrive at the nice, simple expression

$$ f(a+h)=f(a)+hf^\prime(a)+\frac{h^2}{2}f^{\prime\prime}(a)+O(h^3) $$ This says that if we know the value of the function and its first and second derivatives at $x=a$, we can approximate the value of $f$ at $a+h$ with an error on the order of $h$-cubed. So, for instance, if $h=0.1$, our approximation will only be off by a constant multiple of $0.001$. (This constant, incidentally, will depend on how large the third derivative is near $a$.)
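As a quick sanity check (a worked example I'm adding, not part of the original answer), take $f(x)=e^x$, $a=0$, $h=0.1$:

$$ e^{0.1} \approx 1 + 0.1 + \frac{(0.1)^2}{2} = 1.105, \qquad e^{0.1} = 1.10517\ldots $$

The error is about $1.7\times 10^{-4}$, which matches the size of the leading neglected term $\frac{h^3}{6}f^{\prime\prime\prime}(0) = \frac{10^{-3}}{6} \approx 1.7\times 10^{-4}$.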

Of course, I'm only using this "numerical" idea as an example of why we might expand the Taylor series at a location other than $0$ - the idea has plenty of other uses.