Numerical Methods – Hermite Interpolation of e^x: Analyzing Derivative Behavior

approximation, interpolation, numerical methods

I am trying to understand Hermite Interpolation. Here is my pedagogical example.

I want to approximate $f(x)=e^x$ on the domain $[-1,1]$ using Hermite interpolation.

I choose the Chebyshev zeros of $T_{n+1}(x)$ as the interpolation points, i.e. the points $x_0,x_1,\dots,x_n$, where
$$
x_k = \cos\left(\left(k+\frac{1}{2}\right)\frac{\pi}{n+1}\right), \qquad k = 0,\dots,n.
$$
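(These nodes are straightforward to generate; here is a small Python sketch for reference — the code later in the post is Mathematica, so this is purely an illustration.)

```python
import math

def chebyshev_zeros(n):
    """Zeros of T_{n+1}(x): cos((k + 1/2) * pi / (n + 1)) for k = 0, ..., n."""
    return [math.cos((k + 0.5) * math.pi / (n + 1)) for k in range(n + 1)]

nodes = chebyshev_zeros(15)   # the 16 nodes used in the example below
```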

At each point $x_i$ I construct the list $f(x_i),f'(x_i),\dots,f^{(m)}(x_i)$ for $i=0,\dots,n$. That is, I want to use not only the values of $f$ at the points $x_i$ but also its first $m$ derivatives there, which determines a polynomial $P_n(x)$ of degree $(n+1)(m+1)-1$. The error in the approximation by the Hermite interpolation polynomial $P_n(x)$ is given by

$$
E_{n}(x)=f(x)-P_{n}(x)=\frac{f^{(N)}(\xi_{x})}{N!}\prod_{k=0}^{n}(x-x_{k})^{m+1},\qquad N=(n+1)(m+1),
$$

where $\xi_x \in [-1,1]$. For $f(x)=e^x$ this is easily bounded:

$$
\left|E_{n}(x)\right|\leq\frac{e}{N!}\prod_{k=0}^{n}\left|x-x_{k}\right|^{m+1},\qquad N=(n+1)(m+1).
$$
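To see just how small this bound is for the parameters used below, recall the standard fact that for the Chebyshev zeros the monic node polynomial satisfies $\prod_{k}|x-x_k| = |T_{n+1}(x)|/2^{n} \le 2^{-n}$ on $[-1,1]$, and that matching $N=(n+1)(m+1)$ conditions puts $f^{(N)}/N!$ in the remainder. A Python sketch, working in $\log_{10}$ to avoid overflowing $N!$:

```python
import math

n, m = 15, 10
N = (n + 1) * (m + 1)                   # number of interpolation conditions: 176
# |E(x)| <= e * (2^-n)^(m+1) / N!  at the Chebyshev zeros; take log10:
log10_bound = (math.log10(math.e)                 # max |f^(N)| = e on [-1, 1]
               - n * (m + 1) * math.log10(2)      # node-polynomial factor
               - math.lgamma(N + 1) / math.log(10))  # log10(N!)
print(f"log10 of the error bound: {log10_bound:.1f}")
```

The bound comes out hundreds of orders of magnitude below machine precision, which is what makes the observed behaviour below so surprising.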

Of course, if $m=0$ then the Hermite interpolating polynomial is just the Lagrange interpolating polynomial. The problem occurs when I increase $m$, the number of derivatives to include in the interpolation at each point $x_i$.

For example, suppose $n=15$ and $m=10$; then according to the bound the error at $x=1$ should be less than $10^{-35}$ (or something ridiculously small). However, the resulting Hermite polynomial oscillates wildly and attains enormous values on the interval $[-1,1]$. This reminds me very much of Runge's phenomenon, but that contradicts the error bound for the Hermite interpolating polynomial $P_n(x)$, and I am using the Chebyshev zeros as the interpolation points. I observed the same behaviour when trying to approximate $\tan(x)$ on a closed subinterval of $(-\pi/2,\pi/2)$, and likewise for the inverse error function.

Has anyone else observed this? Is this expected behaviour, or have I overlooked something? As it stands it would seem Hermite interpolation is not very useful, but I would have expected it to be better than Lagrange interpolation. Have I missed the point? Can anyone shed some light please?

Incidentally, I build the Hermite interpolating polynomials using Mathematica's built-in function InterpolatingPolynomial[] as follows.

n = 15; m = 10;
x[k_] := N[Cos[(k + 1/2) \[Pi]/(n + 1)]]
(* at each node x_j, supply {x_j} followed by f(x_j), f'(x_j), ..., f^(m)(x_j) *)
interpolationPoints =
  Table[Join[{{x[j]}}, Table[D[Exp[y], {y, k}] /. y -> x[j], {k, 0, m}]], {j, 0, n}];
hermiteP[x_] = InterpolatingPolynomial[interpolationPoints, x];
Plot[{Exp[x], hermiteP[x]}, {x, -1, 1}]
Plot[Exp[x] - hermiteP[x], {x, -1, 1}]
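For readers without Mathematica, here is a rough Python analogue built on confluent (Newton) divided differences: each node is repeated $m+1$ times, and a $k$-fold confluent entry is filled with $f^{(k)}(x_i)/k!$. This is my own sketch, not the post's method; I deliberately keep $n=3$, $m=1$, where double precision is still adequate. Pushing the same routine to $n=15$, $m=10$ in plain floats is exactly the regime where the trouble described above can be expected to appear.

```python
import math

def hermite_coeffs(nodes, m, deriv):
    """Newton coefficients for Hermite interpolation: each node repeated
    m+1 times; deriv(x, k) must return f^(k)(x)."""
    z = [x for x in nodes for _ in range(m + 1)]
    col = [deriv(x, 0) for x in z]          # column 0 of the table
    coeffs = [col[0]]
    for k in range(1, len(z)):
        # repeated node => use the derivative; otherwise the usual recursion
        col = [deriv(z[j], k) / math.factorial(k) if z[j + k] == z[j]
               else (col[j + 1] - col[j]) / (z[j + k] - z[j])
               for j in range(len(z) - k)]
        coeffs.append(col[0])
    return z, coeffs

def newton_eval(z, coeffs, x):
    """Horner-style evaluation of the Newton form."""
    p = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        p = p * (x - z[k]) + coeffs[k]
    return p

n, m = 3, 1                                  # small, well-conditioned case
nodes = [math.cos((k + 0.5) * math.pi / (n + 1)) for k in range(n + 1)]
z, c = hermite_coeffs(nodes, m, lambda x, k: math.exp(x))  # all derivs of e^x are e^x
max_err = max(abs(newton_eval(z, c, -1 + i / 100) - math.exp(-1 + i / 100))
              for i in range(201))
print(f"max error on [-1,1] for n=3, m=1: {max_err:.2e}")
```

For this small case the error bound predicts roughly $e \cdot (2^{-3})^{2}/8! \approx 10^{-6}$, and double precision delivers it.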

If I reduce the number of derivatives $m$, the result gets better. But I thought the result should get better as I increase $m$, not the other way around.

Best Answer

I have found the cause: it seems that Hermite interpolation, like other interpolation methods, is numerically unstable in the sense that if $\left(\tilde{x}_{i},\tilde{f}(\tilde{x}_{i})\right)$ are perturbed values of $(x_i,f(x_i))$, say due to round-off or measurement error, then the resulting interpolating polynomial can deviate wildly from the true Hermite interpolating polynomial (which of course is unique), especially when the relevant Lebesgue constant is large. To remedy this, increase the working precision in the second line of the code above:

x[k_] := N[Cos[(k + 1/2) \[Pi]/(n + 1)],20]
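The effect is easy to reproduce outside Mathematica. High-order confluent divided differences are precisely the quantities destroyed by round-off: a $k$-th divided difference of $f$ over closely spaced points equals $f^{(k)}(\xi)/k!$, which is tiny, while the intermediate subtractions act on $O(1)$ values carrying $O(10^{-16})$ errors. Here is a hedged Python sketch of the same cure using the standard-library `decimal` module (the analogue of raising the working precision with `N[..., 20]`); the setup and names are mine, not from the post:

```python
import math
from decimal import Decimal, getcontext

def divided_difference(values, points):
    """k-th divided difference over k+1 distinct points via the usual recursion."""
    col = list(values)
    for k in range(1, len(points)):
        col = [(col[j + 1] - col[j]) / (points[j + k] - points[j])
               for j in range(len(points) - k)]
    return col[0]

k, eps = 10, Decimal("0.001")
getcontext().prec = 50                     # carry ~50 significant digits

# 10th divided difference of e^x over 11 points spaced 0.001 apart near 0;
# its exact value is e^xi / 10! for some xi in (0, 0.01), about 2.8e-7.
pts = [j * eps for j in range(k + 1)]
vals = [p.exp() for p in pts]              # decimal's exp is correctly rounded

dd_hi = divided_difference(vals, pts)      # high precision: accurate
dd_lo = divided_difference([float(v) for v in vals],
                           [float(p) for p in pts])  # doubles: rounding noise

true = math.exp(0.005) / math.factorial(10)
print(f"high precision:   {float(dd_hi):.6e}  (true ~ {true:.6e})")
print(f"double precision: {dd_lo:.6e}")
```

With 50 digits the recursion recovers $e^{\xi}/10!$ to many figures, while the double-precision run is dominated by amplified rounding error — consistent with why raising the node precision in the Mathematica code restores the accuracy the error bound promises.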