I recently came across the concept of the Runge phenomenon while studying numerical methods for special functions in the book "Numerical Methods for Special Functions" by Amparo Gil, Javier Segura, and Nico Temme, specifically in Chapter 3, Section 3.2.1. The book highlights that the Runge phenomenon often occurs when equispaced interpolation points are used in Lagrange interpolation. This raised a couple of questions in my mind.

First, if we were to use Chebyshev points (the roots of the $k$-th Chebyshev polynomial) instead of equispaced points, but still apply Lagrange interpolation at these Chebyshev points, would this approach effectively solve the Runge phenomenon?

Second, assuming the above is true, I am curious about the specific advantages or additional benefits of employing Chebyshev interpolation in the first place. What distinguishes Chebyshev interpolation using Lagrange polynomials at Chebyshev points, and what advantages does it offer over traditional Lagrange interpolation with equispaced points?

I would greatly appreciate any insights or clarification on these questions.
Numerical Methods – Does Lagrange Interpolation at Chebyshev Points Solve the Runge Phenomenon?
chebyshev-polynomials, interpolation, lagrange-interpolation, numerical-methods
Related Solutions
I have found the cause: it seems that Hermite interpolation, like other interpolation methods, is unstable in the sense that if $\left(\tilde{x}_{i},\tilde{f}(\tilde{x}_{i})\right)$ are perturbed values of $(x_i,f(x_i))$, say due to round-off error or measurement noise, then the resulting interpolating polynomial can deviate wildly from the true Hermite interpolating polynomial (which of course is unique), especially when the Lebesgue constant is large. To remedy this, increase the working precision in the second line of the code above:
x[k_] := N[Cos[(k + 1/2) \[Pi]/(n + 1)],20]
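To see the kind of instability being described, here is a minimal Python sketch (separate from the Mathematica code above; the node count, test function, and perturbation size are my choices for illustration) of how a tiny data perturbation is amplified by a large Lebesgue constant at equispaced nodes:
# Perturbing interpolation data by ~1e-8 at equispaced nodes moves the
# interpolant by roughly (Lebesgue constant) * 1e-8, which is large for
# high degrees -- the instability described above.
import numpy as np
from scipy.interpolate import BarycentricInterpolator

n = 30
xi = np.linspace(-1.0, 1.0, n + 1)          # equispaced nodes
yi = 1.0 / (1.0 + 25.0 * xi**2)             # sample data (Runge's function)

xs = np.linspace(-1.0, 1.0, 2001)
rng = np.random.default_rng(0)
p_true = BarycentricInterpolator(xi, yi)(xs)
p_pert = BarycentricInterpolator(xi, yi + 1e-8 * rng.uniform(-1, 1, yi.shape))(xs)
print(np.abs(p_true - p_pert).max())        # orders of magnitude above 1e-8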
A comment (but probably not a solution) that is too long for the comment field: the series of the exponential gives an alternating series for the given function
$$e^{-x^2}=1-x^2+\tfrac12 x^4-\tfrac16x^6+\tfrac1{24}x^8-\dots+\tfrac{1}{k!}(-x^2)^k+\dots$$
By the quantitative statements of the Leibniz criterion,
$$e_{2n-1}(-x^2)\le e_{2n+1}(-x^2)\le e^{-x^2}\le e_{2n}(-x^2),$$
where $e_n$ is the $n$-th partial sum of the exponential series and the estimate is true for $x^2<2n$. This gives an absolute error of
$$\frac1{(2n)!}x^{4n}\left(1-\tfrac{x^2}{2n+1}\right)\le\left|e_{2n-1}(-x^2)- e^{-x^2}\right|\le\frac1{(2n)!}x^{4n}$$
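These bounds are easy to sanity-check; a small Python sketch (the test values $x^2=\tfrac94$ and $n=5$ are my choices), using exact rational arithmetic for the partial sums:
# Verify e_{2n-1}(-x^2) <= e^{-x^2} <= e_{2n}(-x^2) and the error
# bounds around e_{2n-1}, valid while x^2 < 2n.
from fractions import Fraction
import math

def partial(m, z):
    # m-th partial sum of sum_k z^k / k!, in exact rational arithmetic
    s = t = Fraction(1)
    for k in range(1, m + 1):
        t = t * z / k
        s += t
    return s

x2, n = Fraction(9, 4), 5                    # x^2 = 2.25 < 2n = 10
lo, hi = partial(2*n - 1, -x2), partial(2*n, -x2)
exact = math.exp(-float(x2))
print(float(lo) <= exact <= float(hi))       # True: the sandwich holds

upper = float(x2)**(2*n) / math.factorial(2*n)
lower = upper * (1.0 - float(x2) / (2*n + 1))
err = abs(float(lo) - exact)
print(lower <= err <= upper)                 # True: the error bounds hold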
which shows that, in terms of polynomial approximation, one cannot do much better than the partial sums of the exponential series. To get a small relative error, one now needs to choose $n$ large enough to get
$$\frac{e^{15^2}(15)^{4n}}{(2n)!}<ϵ$$
or about
$$2n(\ln(2n)-\ln(225e))>225+|\ln(ϵ)|$$
which means that one needs $n=406,...,419$ to get $ϵ=10^{-4},...,10^{-18}$.
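The quoted range can be reproduced by a direct search on the logarithm of the condition (logs avoid overflowing $15^{4n}$; `lgamma` supplies $\ln((2n)!)$):
# Smallest n with e^{225} * 15^{4n} / (2n)! < eps, tested in log form:
# 225 + 4n ln(15) - ln((2n)!) < ln(eps).
import math

def smallest_n(eps):
    n = 1
    while 225 + 4*n*math.log(15) - math.lgamma(2*n + 1) >= math.log(eps):
        n += 1
    return n

print(smallest_n(1e-4))     # 406
print(smallest_n(1e-18))    # 419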
Using the arithmetic rules of the exponential, one notices that $e^{-x^2}=\left(e^{-(\tfrac{x}{m})^2}\right)^{m^2}$, for instance with $m=15$. Since a relative error $\delta$ in the base compounds to roughly $m^2\delta$ after raising to the power $m^2$, the relative error $ϵ$ is guaranteed if the partial sum $e_n$ is taken such that the relative error of $e_n(-x^2)$ against $e^{-x^2}$ on $[-\tfrac{15}m,\tfrac{15}m]$ is smaller than $\tfrac{ϵ}{2m^2}$. With $m=15$ this requires the easier-to-control inequality
$$\frac{e^1}{(2n)!}<\tfrac{ϵ}{450}$$
which gives the usual precisions for $n=6,...,12$.
Note that $e_{811}(-15^2)$ in Magma has a gross error, a result of the huge cancellation in the alternating sum: instead of ...e-98 it returns 8e73.
$e_{21}(-\tfrac{x^2}{225})^{225}$ has relative error 5e-19 at $x=15$.
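Checking that last figure needs more than double precision, since 5e-19 sits below machine epsilon; a sketch using mpmath (my choice of big-float library):
# Check: e_21(-x^2/225)^225 against e^{-x^2} at x = 15, in 50-digit precision.
from mpmath import mp, mpf, exp

mp.dps = 50

def partial(n, z):
    s = t = mpf(1)
    for k in range(1, n + 1):
        t *= z / k
        s += t
    return s

x = mpf(15)
approx = partial(21, -x**2 / 225) ** 225
print(approx / exp(-x**2) - 1)     # about 5e-19 in magnitude, as stated above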
If only basic operations are allowed, then we go back to divide and conquer, here in the form of half-and-square. Take $m$ in the above calculation to be a nice power of $2$: $m=16$ could work, but let's take $m=32$. Then the error estimate is still largest at the end of the interval, and the condition at the even larger $x=16$ is
$$\frac{e^{\frac14} \left(\frac14\right)^{2n}}{(2n)!}<\tfrac{ϵ}{2048} \iff \frac{\sqrt[4]e}{2^{4n-11}(2n)!}<ϵ$$
Trying out some small values leads to $n=5$ as a likely candidate, and indeed, $e_9\left(-\tfrac{15^2}{1024}\right)^{1024}\cdot\exp(15^2)-1$ is about -1e-10.
$n=4$ with $e_7$ works as well, with an actual error of about 2e-7 against an error bound of 1e-6.
The routine below uses fewer than 30 operations; replace 7 by 9 in the loop bound for higher accuracy:
xx=-x*x/1024;                              // xx = -(x/32)^2
a=xx;
e=1+a;
for k=2 to 7 do a*=xx/k; e+=a; end for;    // e = e_7(-x^2/1024)
for k=1 to 10 do e=e*e; end for;           // ten squarings: the 2^10 = 1024th power
return e
or, with a division:
xx=x*x/1024;                               // xx = +(x/32)^2
a=xx;
e=1+a;
for k=2 to 7 do a*=xx/k; e+=a; end for;    // e = e_7(x^2/1024)
e=1/e;                                     // one division instead of the negated argument
for k=1 to 10 do e=e*e; end for;           // ten squarings: the 1024th power
return e
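For testing, here is a direct Python transcription of the first variant above (the function name and the test at $x=15$ are mine):
import math

def exp_neg_x2(x, terms=7):
    # (e_terms(-x^2/1024))^1024: short partial sum, then ten squarings
    xx = -x * x / 1024.0
    a = xx
    e = 1.0 + a
    for k in range(2, terms + 1):
        a *= xx / k
        e += a
    for _ in range(10):
        e = e * e
    return e

x = 15.0
print(exp_neg_x2(x) * math.exp(x*x) - 1)      # about  2e-7  (with e_7)
print(exp_neg_x2(x, 9) * math.exp(x*x) - 1)   # about -1e-10 (with e_9)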
Best Answer
Let's consider a function $f(x)$ over the interval $[0,1]$. For instance, we can use $f(x) = \sin^2(e^{2x})$. The following plot illustrates this function along with its polynomial interpolant using 11 equally spaced nodes:
This looks fine. However, when we plot the error as we increase the number of nodes, an interesting pattern emerges:
Contrary to what one might expect, increasing the number of nodes $n$ does not always decrease the error. In fact, the error eventually begins to increase, indicating a loss of convergence; this is a sign of ill-conditioning due to the use of equally spaced nodes. Using the polynomial interpolation error formula (the Lagrange remainder form)
$$f(x)-p_n(x)=\frac{f^{(n+1)}(\xi)}{(n+1)!}\,\Phi(x),\qquad \Phi(x)=\prod_{i=0}^{n}(x-x_i),$$
we can visualize the error indicator $\Phi$:
Here, we observe that $\Phi(x)$ decreases exponentially at each fixed location in the interval, and that $\Phi(x)$ is larger at the ends of the interval than in the middle by an exponentially growing factor. This gap is what can ruin the convergence of polynomial interpolation.
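To get a concrete feel for that gap, here is a minimal Python sketch (the grid resolution and node counts are my choices) comparing $|\Phi|$ near an endpoint with $|\Phi|$ near the midpoint for equispaced nodes on $[0,1]$:
# |Phi(x)| = |prod_i (x - x_i)| for equispaced nodes on [0, 1]:
# the endpoint/midpoint ratio grows roughly like 2^n.
import numpy as np

xs = np.linspace(0.0, 1.0, 4001)
for n in (10, 20, 40):
    xi = np.linspace(0.0, 1.0, n + 1)
    phi = np.abs(np.prod(xs[:, None] - xi[None, :], axis=1))
    near_end = phi[xs < 1.0/n].max()                 # within the first node gap
    near_mid = phi[np.abs(xs - 0.5) < 1.0/n].max()   # around the midpoint
    print(n, near_end / near_mid)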
To further illustrate the Runge phenomenon, consider the function $f(x)=1/(x^2+16)$ over $[-1,1]$, using the same approach as above:
This instability in convergence is the Runge phenomenon. Chebyshev nodes are introduced to mitigate this issue: placing more nodes near the ends of the interval than in the middle evens out the size of $\Phi$ across the interval and decreases the error.
By employing Chebyshev nodes, we obtain the following result:
The error remains consistently within machine epsilon, and it stays stable as the number of nodes increases. Importantly, as predicted by the error indicator function, the error is uniform across the interval at each value of $n$.
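For completeness, here is a minimal sketch reproducing the comparison (the function and interval follow the answer; scipy's barycentric interpolation routine and the node counts are my choices):
# Interpolate f(x) = sin^2(e^{2x}) on [0, 1] at equispaced vs Chebyshev
# nodes and track the maximum error as the number of nodes grows.
import numpy as np
from scipy.interpolate import BarycentricInterpolator

def f(x):
    return np.sin(np.exp(2.0 * x)) ** 2

xs = np.linspace(0.0, 1.0, 5001)
for n in (10, 20, 40, 80):
    eq = np.linspace(0.0, 1.0, n + 1)
    # roots of T_{n+1}, mapped from [-1, 1] to [0, 1]
    ch = 0.5 + 0.5 * np.cos((2*np.arange(n + 1) + 1) * np.pi / (2*(n + 1)))
    err_eq = np.abs(BarycentricInterpolator(eq, f(eq))(xs) - f(xs)).max()
    err_ch = np.abs(BarycentricInterpolator(ch, f(ch))(xs) - f(xs)).max()
    print(f"n={n:3d}   equispaced: {err_eq:.1e}   Chebyshev: {err_ch:.1e}")
# the equispaced error eventually grows, while the Chebyshev error
# shrinks toward machine epsilon and stays there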