First of all, you need to define x0 before you use it. Secondly, you are only plotting a single point in each iteration; that point is drawn, but it is just one pixel and easy to miss. I recommend using
plot(h,err,'o-');
loglog(h,err,'o-');
This shows you where the points are. As @mathreadler pointed out, if you want to see all points of each iteration, you either have to put the plot commands inside the loop and use hold on, or save the results for each $i$ in a vector:
f = @(x) cos(x);
x0 = pi/4;
f_exact = -sin(x0);        % exact derivative of cos at x0
for i = 1:21
    h(i) = 10^(1-i);
    f_approx = (f(x0+h(i)) - f(x0-h(i))) / (2*h(i));  % central difference
    err(i) = abs(f_approx - f_exact);
    d_error(i) = abs(h(i)^2*sin(x0)/6);  % leading truncation-error term
end
plot(h,err,'o-');
figure; %opens a new plot window
loglog(h,err,'o-');
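For completeness, the hold-based variant mentioned above (plotting inside the loop) can be sketched as follows. This is a Python/Matplotlib translation of the same experiment; repeated plot calls on the same axes play the role of MATLAB's hold on:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")          # non-interactive backend; the plot is saved, not shown
import matplotlib.pyplot as plt

f = np.cos
x0 = np.pi / 4
f_exact = -np.sin(x0)          # exact derivative of cos at x0

fig, ax = plt.subplots()
for i in range(1, 22):
    h = 10.0 ** (1 - i)
    f_approx = (f(x0 + h) - f(x0 - h)) / (2 * h)   # central difference
    err = abs(f_approx - f_exact)
    ax.loglog(h, err, "o")     # one point per iteration; the axes accumulate them
ax.set_xlabel("h")
ax.set_ylabel("error")
fig.savefig("error_vs_h.png")
```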
The loglog plot looks decent enough. The problem you are running into (I think) is a loss of accuracy due to cancellation (i.e., when you subtract two numbers that are very close).
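The cancellation effect can be checked numerically: for moderate $h$ the error follows the $O(h^2)$ truncation term, while for very small $h$ the rounding error, roughly $\epsilon/(2h)$, takes over and the total error grows again. A short Python sketch, assuming the same $f=\cos$ and $x_0=\pi/4$ as above:

```python
import numpy as np

f, x0 = np.cos, np.pi / 4
f_exact = -np.sin(x0)

def cd_err(h):
    """Error of the central difference (f(x0+h) - f(x0-h)) / (2h)."""
    return abs((f(x0 + h) - f(x0 - h)) / (2 * h) - f_exact)

# Observed order between h = 1e-2 and h = 1e-3: close to 2 (truncation regime).
p = np.log(cd_err(1e-2) / cd_err(1e-3)) / np.log(1e-2 / 1e-3)

# For very small h the error grows again: cancellation dominates.
grows = cd_err(1e-12) > cd_err(1e-5)
```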
As a third order method, the error has a component of $h^3$ due to the truncation errors and a component $\mu/h$ due to accumulation of rounding errors in the $T/h$ steps. The latter takes over and dominates the total error somewhere around $h=\sqrt[4]\mu\sim 2^{-13}$. This $\mu h^{-1}$ gives the exponent of $-1$ that you see in the last order estimates. That it starts that early indicates that the method error is extremely small for this example.
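As a sanity check on those numbers, the simplified total-error model $e(h) = h^3 + \mu/h$ (with the constants set to 1) can be evaluated directly; a Python sketch under that assumption:

```python
import math

mu = 2.0 ** -52                  # double-precision unit roundoff (assumed)

# Crossover where the rounding term mu/h overtakes the truncation term h^3:
# h^3 = mu/h  =>  h = mu**(1/4), i.e. 2^-13
h_cross = mu ** 0.25

def e(h):
    """Simplified error model with unit constants."""
    return h**3 + mu / h

# Below the crossover, the model's slope in a log-log plot is -1:
h1, h2 = 2.0 ** -20, 2.0 ** -21
slope = math.log(e(h1) / e(h2)) / math.log(h1 / h2)
```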
In my own implementation of BDF using single RK4 steps in the initialization and an order 3 predictor in a naive PEC scheme, I get errors and estimated orders as
1.58e-05 6.7402e-07 2.5267e-08 8.723e-10 2.8788e-11 7.3896e-13 1.5987e-14 0 0
4.551 4.7374 4.8563 4.9213 5.2838 5.5305 20.608 -0
This confirms your results; the differences are due to differences in the implementation. (It seems impossible to reproduce the effect of the floating-point accumulation in this step-size range.)
The interpretation is that the example is too polynomial. The error coefficients of multi-step methods are mostly the higher derivatives of the solution, but these are zero for this equation with quadratic polynomials as solutions.
So what you are essentially seeing in the order estimates is the propagation of the 5th order local error of the first two RK4 steps used for the initialization of the method. That is, the method error behaves now like $h^5$, so that the cross-over with the floating point error $\mu h^{-1}$ is at about $h=\sqrt[6]\mu\sim 2^{-8.5}$, which is about what your data series shows.
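The same back-of-the-envelope calculation for the $h^5$ regime: solving $h^5 = \mu/h$ gives $h = \mu^{1/6} = 2^{-52/6} \approx 2^{-8.7}$, consistent with the quoted $2^{-8.5}$. In Python:

```python
import math

mu = 2.0 ** -52                  # double-precision unit roundoff (assumed)
# h^5 = mu/h  =>  h = mu**(1/6)
h_cross = mu ** (1.0 / 6.0)
log2_h = math.log2(h_cross)      # about -8.67
```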
Try a different example, $y'(t)=-10(\sin(y(t))-\sin(\cos(t)))-\sin(t)$ with exact solution $y=\cos(t)$. Then the working range starts at about $h=2^{-7}$, for larger step sizes the leading error term is not dominant enough to determine the order. I get for $h=2^{-k-6}$, $k=1,2,...,9$, the errors and estimated orders
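One can quickly confirm that $y=\cos t$ solves this ODE: substituting $y=\cos t$ makes $\sin(y(t))-\sin(\cos t)$ vanish, leaving $y'=-\sin t$, which is exactly $(\cos t)'$. A Python check:

```python
import numpy as np

def rhs(t, y):
    """Right-hand side of y' = -10*(sin(y) - sin(cos(t))) - sin(t)."""
    return -10.0 * (np.sin(y) - np.sin(np.cos(t))) - np.sin(t)

t = np.linspace(0.0, 10.0, 101)
# Residual of the exact solution y = cos(t); should be (numerically) zero.
residual = np.max(np.abs(rhs(t, np.cos(t)) - (-np.sin(t))))
```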
2.0842e-08 2.417e-09 2.9028e-10 3.5547e-11 4.3969e-12 5.4867e-13 7.0499e-14 1.8208e-14 1.2323e-14
3.1082 3.0576 3.0297 3.0152 3.0025 2.9603 1.9531 0.56314
This exhibits the order 3 in the working range of step sizes, and a breakdown in order due to floating-point noise accumulation, as expected, at around $h=2^{-13}$, i.e., at the positions $k=7,8,9$.
This change of the origin of the error can also nicely be seen in plots of the error profiles, where the difference to the exact solution is divided by $h^3$ to get a common scale.
The first 6 plots are almost identical, showing the dominance of the leading error term, while the last 3 show the increasing influence of the pseudo-random floating-point error accumulation.
Best Answer
There are probably a few ways to answer this question. Perhaps you could generate the approximations of $f'(0)$ using the following code:
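One possible version of such code, sketched in Python and assuming the central difference scheme from the question together with the purely illustrative choice $f(x)=e^x$ (so that $f'(0)=1$):

```python
import numpy as np

f = np.exp                    # illustrative choice: f(x) = e^x, so f'(0) = 1
x0 = 0.0
n = 10**5                     # number of step sizes to try
h = 1.0 / np.arange(1, n + 1)                 # h = 1, 1/2, 1/3, ..., 1/n
approx = (f(x0 + h) - f(x0 - h)) / (2 * h)    # central differences for f'(0)
```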
Then compute the error as
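A sketch of this step in Python, assuming for illustration $f(x)=e^x$, whose exact derivative at $0$ is $1$:

```python
import numpy as np

f = np.exp                    # illustrative choice: f'(0) = 1
n = 10**5
h = 1.0 / np.arange(1, n + 1)
approx = (f(h) - f(-h)) / (2 * h)     # central differences for f'(0)
err = np.abs(approx - 1.0)            # error against the exact derivative
```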
You could take $n$ much larger (say, $n = 10^6$), and then find the minimum error, and the step size that attains it, with the min function:
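A sketch of this last step in Python, with np.argmin playing the role of MATLAB's two-output [m, k] = min(err), again assuming the illustrative $f(x)=e^x$ with $f'(0)=1$:

```python
import numpy as np

f = np.exp
n = 10**6
h = 1.0 / np.arange(1, n + 1)
err = np.abs((f(h) - f(-h)) / (2 * h) - 1.0)   # central-difference errors

k = np.argmin(err)           # index of the smallest error (MATLAB: [m, k] = min(err))
h_best, err_best = h[k], err[k]
```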