Does Taylor series expansion minimize maximum error for polynomial approximation?

Tags: approximation, convex optimization, nonlinear optimization, polynomials, taylor expansion

A common way to fit a polynomial to a function over a range is to perform a Taylor series expansion centered at some point in that range. The coefficients of the approximating polynomial are then determined by the derivatives of the function evaluated at the center point (divided by the appropriate factorials). What's not clear to me is whether this is actually the best approximation.

Given a function $f$ and a range $[a, b]$ I want to find a polynomial $p$ of fixed degree $n$ that minimizes the following:

$$\max_{x \in [a, b]} \lvert f(x) - p(x) \rvert$$

Assuming you can evaluate all derivatives of $f$, an obvious first attempt is to pick $\frac{a + b}{2}$ as the center and take the Taylor series expansion there. The resulting polynomial has the same first $n$ derivatives as $f$ at that point, which seems like a nice property, but it's not clear to me that it minimizes the above metric.
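
For concreteness, here is a minimal sketch of how one might build that centered Taylor polynomial symbolically. It assumes SymPy and uses $f(x) = e^x$ on $[0, 1]$ with $n = 3$ purely as an example; the choice of function, interval, and degree is not from the question.

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x)               # example target function (assumption)
a, b, n = 0, 1, 3           # example interval and degree (assumption)
c = sp.Rational(a + b, 2)   # center the expansion at (a + b) / 2

# Degree-n Taylor polynomial: sum of f^(k)(c) / k! * (x - c)^k.
p = sum(f.diff(x, k).subs(x, c) / sp.factorial(k) * (x - c)**k
        for k in range(n + 1))
print(sp.expand(p))
```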

Does this sort of Taylor series expansion minimize the metric above? If not, is there a counterexample, or can a different center be chosen instead? What algorithms exist for minimizing this metric?

Best Answer

No, the Taylor series does not minimize the uniform norm in general.

Take for example $f(x)=x^2$, $[a,b]=[0,1]$ and $n=1$. Then it can be seen that $$\min_{\deg(p)\leq n}\max_{x \in [a, b]} \lvert f(x) - p(x) \rvert=1/8,$$ which is attained for $p(x)=x-1/8$ (the error $x^2-(x-1/8)$ equioscillates between $\pm 1/8$ at $x=0,\ 1/2,\ 1$). On the other hand, $$\min_{t\in\mathbb{R}}\max_{x \in [0,1]}|x^2-T_t(x)|=\min_{t\in\mathbb{R}}\max_{x \in [0,1]}|x-t|^2=\frac{1}{4},$$ attained at $t=1/2$, where $T_t(x)=t^2+2t(x-t)$ is the degree-$1$ Taylor polynomial of $x^2$ centered at $t$. So in this case no Taylor expansion attains the minimal value.
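
As a quick numerical sanity check of these two values (a sketch assuming Python with NumPy; the grid size is arbitrary):

```python
import numpy as np

# Dense grid on [0, 1] to estimate the sup norm numerically.
x = np.linspace(0.0, 1.0, 100_001)
f = x**2

# Minimax linear approximation p(x) = x - 1/8.
p_minimax = x - 0.125

# Degree-1 Taylor polynomial of x^2 centered at t = 1/2:
# T_t(x) = t^2 + 2 t (x - t).
t = 0.5
p_taylor = t**2 + 2 * t * (x - t)

print(np.max(np.abs(f - p_minimax)))  # ~0.125
print(np.max(np.abs(f - p_taylor)))   # ~0.25
```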

P.S. For the best approximating polynomial, see Chebyshev approximation and the Remez (exchange) algorithm.
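
As a practical starting point, interpolating at Chebyshev points gives a near-minimax polynomial (the Remez algorithm can then refine it toward the true equioscillating optimum). A minimal sketch using NumPy's `Chebyshev.interpolate`, again with $f = e^x$ on $[0, 1]$ and degree $3$ as example choices:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

f = np.exp          # example target function (assumption)
a, b, n = 0.0, 1.0, 3

# Interpolate f at Chebyshev points of degree n on [a, b];
# this is close to, but not exactly, the minimax polynomial.
p = Chebyshev.interpolate(f, n, domain=[a, b])

# Estimate the uniform error on a dense grid.
x = np.linspace(a, b, 100_001)
print(np.max(np.abs(f(x) - p(x))))
```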