Ratio of successive errors does not converge to $2^{-p}$.

Tags: MATLAB, numerical methods, octave, ordinary differential equations

Assume a multistep method of order $p$. Then for the global error we have
$$ \| y_{n+1} - y(x_{n+1}) \| = O(h^p) \leq K h^p. $$
Given the above, if we halve the step $h$, then
$$ \frac{e_{\text{new}}}{e_{\text{old}}} \approx 2^{-p}, $$
where $e$ denotes the global error.
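This suggests a direct numerical check of the order: halve $h$ repeatedly and take $-\log_2$ of successive error ratios. A minimal Python sketch (the error values here are invented to illustrate an ideal third-order method, not taken from the actual run):

```python
import math

def observed_order(e_old, e_new):
    """Estimated convergence order when the step size is halved:
    e_new/e_old ~ 2^(-p), so p ~ -log2(e_new/e_old)."""
    return -math.log2(e_new / e_old)

# Errors of a hypothetical third-order method: halving h divides e by 8.
errors = [1e-3, 1.25e-4, 1.5625e-5]
orders = [observed_order(errors[i], errors[i + 1])
          for i in range(len(errors) - 1)]
print([round(p, 6) for p in orders])  # -> [3.0, 3.0]
```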

With this in mind, I'm trying to test it using the BDF3 method on the problem:

$$
\begin{cases}
y' = 2\sqrt{y-1} \\
y(0) = 5
\end{cases}
$$

for $t \in [0,2]$, with analytical solution: $y(t) = 1 + (t+2)^2$.
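As a quick sanity check (a small Python sketch, not part of the original script), the stated solution can be verified against the ODE and the initial condition:

```python
import math

# Check that y(t) = 1 + (t+2)^2 solves y' = 2*sqrt(y-1) with y(0) = 5.
y  = lambda t: 1 + (t + 2) ** 2
dy = lambda t: 2 * (t + 2)          # exact derivative of y

assert y(0) == 5                    # initial condition
for t in [0.0, 0.5, 1.0, 1.5, 2.0]:
    rhs = 2 * math.sqrt(y(t) - 1)   # right-hand side of the ODE
    assert abs(dy(t) - rhs) < 1e-12
print("solution verified on [0, 2]")
```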

However, the sequence $-\log_2(e_{\text{new}}/e_{\text{old}})$ doesn't converge to $3$ as expected; instead, after a few iterations its value settles around $-1$, which doesn't make sense to me.

Here's the MATLAB script:

clf

% Initialize problem
y0 = 5;
t0 = 0;
tfinal = 2;
errors = [];
error_ratios = [];

for k = 2:10
    h = 2^(-k);

    [tout, yout] = bdf3('f3', t0, tfinal, h, y0);

    err = max(abs(yout - f3true(tout)));   % max-norm global error
    if k > 2
        error_ratios = [error_ratios, -log2(err/err_prev)];
    end
    errors = [errors, err];
    err_prev = err;
end
format short g;

disp(errors);
disp(error_ratios);

Output:

1.6941e-05   7.0892e-07   2.6055e-08   8.8711e-10   2.8844e-11   5.8442e-13   7.7804e-13   1.2896e-12   2.9665e-12

       4.5787        4.766       4.8763       4.9427       5.6251     -0.41284     -0.72904      -1.201

Best Answer

As a third order method, the error has a component of $h^3$ due to the truncation errors and a component $\mu/h$ due to accumulation of rounding errors in the $T/h$ steps. The latter takes over and dominates the total error somewhere around $h=\sqrt[4]\mu\sim 2^{-13}$. This $\mu h^{-1}$ gives the exponent of $-1$ that you see in the last order estimates. That it starts that early indicates that the method error is extremely small for this example.
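The crossover step can be estimated from this error model directly. A small Python sketch (the error constant $C$ is an illustrative assumption; $\mu$ is the double-precision unit roundoff):

```python
import math

# Total-error model for a p-th order method with unit roundoff mu:
#   e(h) ~ C*h^p + mu/h   (truncation + accumulated rounding).
# Setting the derivative to zero gives the crossover step
#   h* = (mu/(p*C))^(1/(p+1)).
p  = 3
C  = 1.0          # assumed O(1) error constant (illustrative)
mu = 2.0 ** -52   # double-precision unit roundoff

h_star = (mu / (p * C)) ** (1 / (p + 1))
print(math.log2(h_star))   # about -13.4, i.e. h* ~ 2^-13
```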

In my own implementation of BDF using single RK4 steps in the initialization and an order 3 predictor in a naive PEC scheme, I get errors and estimated orders as

1.58e-05  6.7402e-07  2.5267e-08   8.723e-10  2.8788e-11  7.3896e-13  1.5987e-14           0        0
       4.551      4.7374      4.8563      4.9213      5.2838      5.5305      20.608          -0

This confirms your results; the differences are due to differences in the implementations. (It seems impossible to reproduce the effect of the floating-point accumulation in this step-size range.)

The interpretation is that the example is too polynomial: the error coefficients of multistep methods are essentially higher derivatives of the solution, and these vanish here because the solution of this equation is a quadratic polynomial.

So what you are essentially seeing in the order estimates is the propagation of the 5th-order local error of the first two RK4 steps used to initialize the method. That is, the method error now behaves like $h^5$, so the cross-over with the floating-point error $\mu h^{-1}$ is at about $h=\sqrt[6]\mu\sim 2^{-8.5}$, which is about what your data series shows.
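Concretely, for the original problem all derivatives beyond the second vanish:
$$ y(t) = 1+(t+2)^2, \qquad y''(t)=2, \qquad y^{(3)}(t)=y^{(4)}(t)=0, $$
so the leading local error term of BDF3, proportional to $h^4 y^{(4)}$, is identically zero, and only the initialization error remains.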


Try a different example: $y'(t)=-10(\sin(y(t))-\sin(\cos(t)))-\sin(t)$ with exact solution $y(t)=\cos(t)$. Then the working range starts at about $h=2^{-7}$; for larger step sizes the leading error term is not dominant enough to determine the order. For $h=2^{-k-6}$, $k=1,2,\dots,9$, I get the errors and estimated orders

2.0842e-08   2.417e-09  2.9028e-10  3.5547e-11  4.3969e-12  5.4867e-13  7.0499e-14  1.8208e-14  1.2323e-14
        3.1082      3.0576      3.0297      3.0152      3.0025      2.9603      1.9531     0.56314

This exhibits order $3$ in the working range of step sizes and a breakdown of the order due to floating-point noise accumulation, as expected, at around $h=2^{-13}$, i.e. at positions $k=7,8,9$.

This change of the origin of the error can also nicely be seen in plots of the error profiles, where the difference to the exact solution is divided by $h^3$ to get a common scale.

(Plot: error profiles $(y_h(t)-y(t))/h^3$ for the nine step sizes $h=2^{-7},\dots,2^{-15}$.)

The first 6 plots are almost identical, showing the dominance of the leading error term, while the last 3 show the increasing influence of the pseudo-random floating-point error accumulation.
