[Math] How to implement the adaptive Heun’s method

numerical-methods, ordinary-differential-equations

I'm trying to implement Heun's method as a Python function, but in an adaptive style. For the rectangle method, for example, the adaptive version compares the estimate over [a, b] with the sum of the estimates over [a, m] and [m, b], where m is the midpoint of a and b. If the two agree within a tolerance, it returns the sum; if not, it recurses on both subintervals.
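To make the analogy concrete, here is a minimal sketch (my own illustration, on a toy integrand) of the adaptive rectangle/midpoint scheme described above:

```python
import math

def adaptive_rect(f, a, b, tol):
    # midpoint-rule estimate over [a, b] versus the sum of the two
    # half-interval estimates; recurse on both halves if they disagree
    m = (a + b) / 2.
    whole = f(m) * (b - a)
    halves = f((a + m) / 2.) * (m - a) + f((m + b) / 2.) * (b - m)
    if abs(whole - halves) < tol:
        return halves
    return (adaptive_rect(f, a, m, tol / 2.)
            + adaptive_rect(f, m, b, tol / 2.))

print(adaptive_rect(math.sin, 0.0, math.pi, 1e-8))  # close to 2
```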

This is what I got so far:

import math

def k(x,y):
    return math.sin(x+y)

''' h is the step size '''
def Heun(f,x,y,h,end):
    while x < end:
        f0=f(x,y)
        z=y+h*f0
        x+=h
        f1=f(x,z)
        y+=h*0.5*(f0+f1)
    return y 

def AdaptDiff(diffF,f,x,y,h,end,tol):
    if abs(1.-x/end) < tol:
        return y
    y1=diffF(f,x,y,1,x+h)
    y_=diffF(f,x,y,1,x+h/2.)
    y2=diffF(f,x+h/2.,y_,1,x+h)
    if abs(1.-y1/y2) > tol:
        return AdaptDiff(diffF,f,x+h/2.,y2,h/2.,end,tol)
    return AdaptDiff(diffF,f,x+h,y1,h*2.,end,tol)

print(AdaptDiff(Heun,k,0,1,1/2.,1,1e-10))

But I'm getting a maximum recursion depth exceeded error. Does anyone know what's wrong here? I should be getting 1.73513792333.

Thanks.

Best Answer

On theoretical lower accuracy bounds for order 2 methods

Heun's method is of second order: the global error is of second order, and the local discretization error is of size $$ |y_n(t_n+h)-y_n(t_n)-h·\Phi_f(t_n,y_n(t_n),h)|=C\cdot h^3, $$ where $y_n(t)$ solves the IVP $y_n'=f(t,y_n)$, $y_n(t_n)=y_n$. To be measurable, this must not fall below the machine precision $\mu\sim 10^{-16}$, so $h\sim 10^{-5}$ is the smallest step size for which the order behaves numerically as it does theoretically.
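The third-order local error can be checked numerically. A minimal sketch using the test problem $y'=y$ with exact solution $e^x$ (one Heun step from $x=0$; the problem and constants are my choice, not from the original post):

```python
import math

def heun_step(f, x, y, h):
    # a single Heun step of size h
    k0 = f(x, y)
    k1 = f(x + h, y + h * k0)
    return y + 0.5 * h * (k0 + k1)

f = lambda x, y: y              # test problem y' = y, exact solution e^x
for h in (1e-1, 5e-2, 2.5e-2):
    err = abs(math.exp(h) - heun_step(f, 0.0, 1.0, h))
    # err/h^3 should approach a constant C (here 1/6) as h shrinks
    print("h=%.4f  local error=%.3e  err/h^3=%.4f" % (h, err, err / h**3))
```

Halving $h$ reduces the one-step error by roughly a factor of $8$, as expected for a local error of size $C·h^3$.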

The estimate of the global error has the form and size $$ C·T·h^2+\frac{D·T·\mu}{h}\ge 3·T·\left(\frac14·C·(D·\mu)^2\right)^{\frac13}, \qquad\text{with equality for }h=\left(\frac{D·\mu}{2·C}\right)^{\frac13}. $$ The second term stands for the accumulation of floating point errors of about $D·\mu$ in each step over $N=T/h$ steps (a more accurate factor than $N$ for longer time intervals is $(e^{LT}-1)/(Lh)$, with $L$ the Lipschitz constant).

This is a convex function of $h$ with a minimum, and it therefore limits any realistic tolerance prescription. Assuming the constants $C,D,T$ all have magnitude $1$, the lower limit is about $tol=10^{-10}$, reached with a step size of again $h\sim 10^{-5}$. With a prescription of $tol=10^{-12}$, the resulting $N=10^6$ steps are guaranteed to accumulate floating point errors of size at least $10^{-10}$, preventing that tolerance from ever being reached.
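Plugging in the assumed magnitudes $C=D=T=1$ and $\mu=10^{-16}$, these formulas can be evaluated directly:

```python
# hypothetical constants C = D = T = 1 and machine precision mu = 1e-16,
# as assumed in the text above
mu = 1e-16
C = D = T = 1.0
h_opt = (D * mu / (2. * C)) ** (1. / 3.)                     # optimal step size
err_min = 3. * T * (0.25 * C * (D * mu) ** 2) ** (1. / 3.)   # minimal error
print("h_opt   = %.2e" % h_opt)    # about 3.7e-06
print("err_min = %.2e" % err_min)  # about 4.1e-11
```

This reproduces the orders of magnitude quoted above: an optimal step size near $10^{-5}$ and a best reachable error near $10^{-10}$.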


On ways to have a correct adaptive scheme

The following corrects your Python code to a version where $h$ is fixed over the full integration interval and adapted over several integration runs, so that the relative error at the end lies between tol/4 and tol. This is the variant closest to your code, but probably not what was originally intended. The original intent may have been to adapt $h$ for each individual Heun step, but that would require more radical changes to your code.
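Such a per-step adaptive Heun could look like the following sketch. The acceptance test `err <= tol*h` (error per unit step) and the grow/shrink factors are my own illustrative choices, not part of the original code:

```python
import math

def heun_step(f, x, y, h):
    # one Heun step of size h
    k0 = f(x, y)
    k1 = f(x + h, y + h * k0)
    return y + 0.5 * h * (k0 + k1)

def adaptive_heun(f, x, y, h, end, tol):
    # Per-step adaptation: compare one step of size h against two of
    # size h/2, estimate the error by Richardson extrapolation, and
    # grow or shrink h accordingly.
    while x < end:
        if x + h > end:
            h = end - x
        y1 = heun_step(f, x, y, h)
        ym = heun_step(f, x, y, h / 2.)
        y2 = heun_step(f, x + h / 2., ym, h / 2.)
        err = abs(y1 - y2) / 3.          # error estimate for y2
        if err <= tol * h:               # accept: error per unit step
            x += h
            y = (4 * y2 - y1) / 3.       # extrapolated value
            if err < tol * h / 64.:
                h *= 2.                  # error far below target: grow h
        else:
            h /= 2.                      # reject the step and retry
    return y

f = lambda x, y: math.sin(x + y)
print("%.10f" % adaptive_heun(f, 0.0, 1.0, 0.1, 1.0, 1e-8))
```

For the test problem $y'=\sin(x+y)$, $y(0)=1$, this reproduces the value $y(1)\approx 1.80106921$ found by the corrected code below.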

Integrating once with step size $h$ to get the value $y_1$ and once with $h/2$ to get $y_2$, one has $y_1=y^*+C·h^2+O(h^3)$ and $y_2=y^*+\frac14C·h^2+O(h^3)$. This can be used to compute better approximations of the error terms and of the exact value: $$ y^*=\frac{4·y_2-y_1}3+O(h^3),\quad y_1-y^*=\frac43(y_1-y_2)+O(h^3),\quad y_2-y^*=\frac13(y_1-y_2)+O(h^3). $$
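These extrapolation formulas can be checked on the test problem $y'=y$, $y(0)=1$ (exact value $e$ at $x=1$), using a fixed-step Heun integrator; the test problem is my choice for illustration:

```python
import math

def heun(f, x, y, h, end):
    # fixed-step Heun integrator
    while x < end:
        if x + h >= end:
            h = end - x
        k0 = f(x, y)
        k1 = f(x + h, y + h * k0)
        x += h
        y += 0.5 * h * (k0 + k1)
    return y

f = lambda x, y: y                  # exact solution: y(x) = e^x
y1 = heun(f, 0.0, 1.0, 0.1, 1.0)    # one run with step size h
y2 = heun(f, 0.0, 1.0, 0.05, 1.0)   # one run with step size h/2
ystar = (4 * y2 - y1) / 3           # extrapolated value
print("err(y1)=%.2e  err(y2)=%.2e  err(y*)=%.2e"
      % (abs(math.e - y1), abs(math.e - y2), abs(math.e - ystar)))
```

The error of $y_2$ is close to a quarter of that of $y_1$ (order 2), and the extrapolated value $y^*$ is more accurate than either run.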

import math

def f(x,y):
    return math.sin(x+y)

''' h is the step size '''
def Heun(f,x,y,h,end):
    while x < end:
        # floating point error may make x over- or undershoot end
        if x+h>=end:
            h=end-x
        k0=f(x,y)
        k1=f(x+h,y+h*k0)
        x+=h
        y+=h*0.5*(k0+k1)
    return y 

def AdaptDiff(diffF,f,x,y,h,end,tol):
    # print debugging information on the recursion level
    print("called with h=", h)
    if abs(1.-x/end) < tol:
        return y
    # integrate the full interval,
    # once with step size h and once with h/2
    y1=diffF(f,x,y,h,end)
    y2=diffF(f,x,y,h/2.,end)
    # print debugging information on the approximations found
    print("y1=%.15f, y2=%.15f, y1-y2=%.4e, y*=%.15f" % (y1, y2, y1-y2, (4*y2-y1)/3))
    # if the relative error is too large, decrease the step size
    err = abs(1.-y1/y2)/3
    if err > tol:
        while err > tol/2.:
            h /= 2.
            err /= 4.
        return AdaptDiff(diffF,f,x,y,h,end,tol)
    # if the relative error is far too small, increase the step size
    # (this will rarely happen)
    if err < tol/64.:
        return AdaptDiff(diffF,f,x,y,h*4.,end,tol)
    # return the extrapolated value; might also return y2
    return (4*y2-y1)/3

x0 = 0; y0 = 1; xe = 1
tol = 1e-10; h = (tol/(xe-x0))**0.5

print("returned %.16f" % AdaptDiff(Heun,f,x0,y0,h,xe,tol))

This recursion finishes in one step, i.e., with two integration runs with step sizes h=1e-05 and h/2=5e-06, for the target tol=1e-10:

called with h= 1e-05
y1=1.801069211924622, y2=1.801069211940325, y1-y2=-1.5703e-11, y*=1.801069211945559
returned 1.8010692119455591