Why doesn't the Taylor expansion of $e^{-1/x^2}$ at $0$ converge to the function itself?

real-analysis, taylor-expansion

I know that if you take the Taylor expansion of $f(x) = e^{-1/x^2}$ (with $f(0) = 0$) at $x = 0$, the resulting series is identically $0$. However, I was wondering whether there is a rigorous proof of why the Taylor series and the actual function differ at every point other than $0$. I was thinking of using the Lagrange remainder:
$$
R_n(x) = \frac{f^{(n)}(z)x^n}{n!}
$$

where $z \in (0, x)$ for any fixed $x$. I tried to show that $\lim_{n \rightarrow \infty} R_n(x) \neq 0$, but I ran into trouble: the $n!$ in the denominator seems to grow faster than the numerator, and it is quite hard to characterize the value of $f^{(n)}(z)$. Any help would be appreciated.
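
For reference, a standard sketch of what those derivatives look like (assuming $f(0)$ is defined to be $0$, so that $f$ is smooth on all of $\mathbb{R}$): by induction, for $x \neq 0$,
$$
f^{(n)}(x) = p_n\!\left(\frac{1}{x}\right) e^{-1/x^2}
$$
for some polynomial $p_n$, and since $e^{-1/x^2} \to 0$ faster than any power of $1/x$ grows as $x \to 0$, it follows that $\lim_{x \to 0} f^{(n)}(x) = 0$ and hence $f^{(n)}(0) = 0$ for every $n$.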

Best Answer

Since every derivative of $f$ vanishes at $0$, the $n$th Maclaurin polynomial is identically zero for each integer $n$. The remainder term is therefore just $f$ itself, and since $f$ does not vanish for $x \neq 0$, $f$ is not represented by its Taylor series at $x = 0$.
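
Spelled out slightly (a short elaboration, under the same convention $f(0) = 0$): since $f^{(k)}(0) = 0$ for every $k$, the $n$th Maclaurin polynomial and the corresponding remainder are
$$
P_n(x) = \sum_{k=0}^{n} \frac{f^{(k)}(0)}{k!}\,x^k = 0, \qquad R_n(x) = f(x) - P_n(x) = e^{-1/x^2},
$$
which is nonzero for every $x \neq 0$, so $R_n(x) \not\to 0$ there. The Maclaurin series does converge, but to the zero function rather than to $f$.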
