[Math] The rate of convergence for polynomial interpolation of different functions

chebyshev polynomials, interpolation, numerical methods

I have written some code in MATLAB to polynomially interpolate functions at Chebyshev nodes using the Chebyshev basis (I compute the coefficients with respect to the Chebyshev basis, multiply them by the corresponding Chebyshev polynomials, and sum). For higher degrees of the polynomial, the interpolant approaches the function.
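In sketch form, the approach is something like the following (a minimal illustration, not my exact code; the choice of first-kind Chebyshev nodes, the test function and the degree here are just examples):

```matlab
% Minimal sketch: interpolate f on [-1,1] at the n+1 Chebyshev nodes of the
% first kind, using the Chebyshev basis (test function and degree are examples).
f  = @(x) 1./(1 + 25*x.^2);           % f_1 from the question
n  = 30;                              % degree of the interpolant
th = (2*(0:n)' + 1)*pi/(2*(n + 1));   % Chebyshev angles theta_j
xk = cos(th);                         % Chebyshev nodes of the first kind

% Coefficients c_k of p(x) = sum_k c_k*T_k(x), via discrete orthogonality
c    = (2/(n + 1)) * (cos(th * (0:n))' * f(xk));
c(1) = c(1)/2;                        % the k = 0 term gets half weight

% Evaluate p(x) = sum_k c_k*cos(k*acos(x)) on a fine grid
x = linspace(-1, 1, 2000)';
p = cos(acos(x) * (0:n)) * c;
```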

I've computed the errors for several functions and plotted them as a function of the degree of the interpolating polynomial $p$. The functions I'm interpolating are $f_1(x) = \frac{1}{1+25x^2}$, $f_2(x) = |x|$, $f_3(x) = |\sin(5x)|^3$ and $f_4(x) = \cos(20x)$. I've separated their errors into two graphs depending on their order of magnitude.
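For concreteness, error curves of this kind can be produced with a loop like the following sketch (the degree range, the fine evaluation grid, and using the maximum error on that grid as the error measure are example choices):

```matlab
% Illustrative sketch of error-vs-degree curves for f_1 and f_2
% (degree range, evaluation grid and error measure are example choices).
fs   = {@(x) 1./(1 + 25*x.^2), @(x) abs(x)};   % f_1 and f_2
degs = 1:80;
x    = linspace(-1, 1, 2000)';
E    = zeros(numel(degs), numel(fs));
for m = 1:numel(fs)
    f = fs{m};
    for i = 1:numel(degs)
        n    = degs(i);
        th   = (2*(0:n)' + 1)*pi/(2*(n + 1));
        c    = (2/(n + 1)) * (cos(th * (0:n))' * f(cos(th)));
        c(1) = c(1)/2;
        E(i, m) = max(abs(f(x) - cos(acos(x) * (0:n)) * c));
    end
end
semilogy(degs, E);                    % logarithmic y-axis, linear x-axis
xlabel('degree n'); ylabel('max error');
legend('f_1', 'f_2');
```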
Figure 1, Figure 2

Seeing these, however, has raised some questions.

First of all, I'm confused about how these polynomials converge towards the actual function. According to my textbook they should converge either as $\mathcal{O}(C^{-n})$ or as $\mathcal{O}(n^{-v})$, with $n$ the degree of the polynomial. But I'm lost as to how one would see this from the graph (especially since its $y$-axis is plotted logarithmically) and how one would go about calculating $C$ or $v$.

Secondly, I'm also confused about the oscillations of the error. Why does the error change depending on whether the degree of the polynomial is even or odd?

And lastly, I was also wondering what causes the difference in convergence between the functions. The graphs show that the functions involving an absolute value converge much more slowly than the other two. Is this because they aren't as smooth?

Best Answer

The oscillation of the error is typical for your functions. In general, any bounded function of $n$ is $\mathcal{O}(1)$, so multiplying by a bounded oscillating factor is still consistent with the error estimates. In your first graph the errors look like $\mathcal{O}(C^{-n})$ except for small $n$. In your second graph the errors look like $\mathcal{O}(n^{-v})$ with $v$ equal to $1$ or $2$. To calculate the constants, avoid the oscillations by using only even $n$ or only odd $n$, and skip values of $n$ less than $20$. Then take a least-squares linear fit of the logarithm of the error: against $n$ for the geometric rate (the slope is $-\log C$) and against $\log n$ for the algebraic rate (the slope is $-v$).
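For concreteness, here is a minimal sketch of that fitting step in MATLAB, using $f_2(x) = |x|$ as an example (the degree range, the evaluation grid, and the maximum-error measure are assumptions on my part, not part of the question):

```matlab
% Sketch of the fitting step described above, using f_2 = |x| as an example
% (degree range, evaluation grid and error measure are assumptions).
f    = @(x) abs(x);
degs = 20:2:100;                      % even degrees only, n >= 20
x    = linspace(-1, 1, 2000)';
err  = zeros(size(degs));
for i = 1:numel(degs)
    n    = degs(i);
    th   = (2*(0:n)' + 1)*pi/(2*(n + 1));
    c    = (2/(n + 1)) * (cos(th * (0:n))' * f(cos(th)));
    c(1) = c(1)/2;
    err(i) = max(abs(f(x) - cos(acos(x) * (0:n)) * c));
end

% Algebraic rate err ~ n^(-v): log(err) is linear in log(n), slope -v
pa = polyfit(log(degs), log(err), 1);
v  = -pa(1);

% Geometric rate err ~ C^(-n) would instead be fitted against n itself:
% pg = polyfit(degs, log(err), 1);  C = exp(-pg(1));
```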
