How to interpret the error graph

Tags: numerical methods, python, rounding error, truncation error

I wrote an RK4 algorithm and am testing it on $y' = -ty$ which has the solution
$y(t) = e^{-t^2 / 2}$

I decided to also graph the error, which I am now trying to decipher. I plotted the solution and the error for several time steps, halving the time step for each test:

$\Delta t = 0.1$ : [plot of solution and error]

$\Delta t = 0.05$ : [plot of solution and error]

$\Delta t = 0.025$ : [plot of solution and error]

In general, my questions are:

How can the error in my algorithm be analyzed? And why does that first graph look so funny?

As for the work I've done so far:

I am conceptually aware of the following jargon and roughly what it means; however, the class I took in numerical methods was run by a genius, not a communicator:

  • "truncation error"
  • "round-off error"
  • RK4 is "fourth order" and so error should drop like $(\Delta t)^4$

My best guess is that error increases with each time step due to round-off. However, I have no idea how to assure myself that this is true.
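I suppose one way to test this would be to rerun the integration in single precision: if round-off dominated, a float32 run should degrade sharply while a float64 run should not. A rough sketch of that experiment (hypothetical helper names rk4_step and max_rel_error, using the textbook RK4 stages):

import numpy as np

def rk4_step(dt, t, f, y):
    # one classical RK4 step (k4 evaluated at t + dt)
    k1 = dt * f(t, y)
    k2 = dt * f(t + 0.5 * dt, y + 0.5 * k1)
    k3 = dt * f(t + 0.5 * dt, y + 0.5 * k2)
    k4 = dt * f(t + dt, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

def max_rel_error(dt, dtype):
    # integrate y' = -t * y from t = -10 to 10 in the given precision
    f = lambda t, y: -t * y
    t = np.arange(-10.0, 10.0, dt, dtype=dtype)
    y = np.empty_like(t)
    y[0] = np.exp(dtype(-50.0))    # exact value exp(-t_0^2 / 2)
    for i in range(1, len(t)):
        y[i] = rk4_step(dtype(dt), t[i - 1], f, y[i - 1])
    return np.max(np.abs(y / np.exp(-t ** 2 / 2) - 1))

# If the two results are close, truncation error dominates; if the
# float32 result is much worse, round-off matters at this step size.
print(max_rel_error(0.1, np.float64))
print(max_rel_error(0.1, np.float32))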

The only thing I knew to do was to check that the error is proportional to $(\Delta t)^4$. It sure seems like the error isn't dropping that fast. I found the maximum error for the first test,

$E_{\Delta t} = c\,(0.1)^4 = 0.4321,$

which gives $c = 4321$. Applying that constant to the half step predicts $4321\,(0.05)^4 = 0.027$, but the maximum error I measured was

$E_{\Delta t / 2} = 0.0930,$

roughly three times the prediction. I remember my professor mentioning "order of magnitude" a lot. I guess the two are within an "order of magnitude", so does that mean everything is good here?
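A more direct check than fitting the constant $c$ is to estimate the observed order from two runs: if $E \approx c\,(\Delta t)^p$, then halving $\Delta t$ divides the error by $2^p$, so $p \approx \log_2(E_{\Delta t} / E_{\Delta t/2})$. A minimal sketch using the two maximum errors quoted above:

import math

E_1 = 0.4321   # maximum relative error for dt = 0.1
E_2 = 0.0930   # maximum relative error for dt = 0.05

# If E ~ c * dt^p, halving dt divides E by 2^p.
p = math.log2(E_1 / E_2)
print(p)   # about 2.2 -- far from the 4 expected of RK4

For reference, here is my full code: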

import math
import numpy as np


def rk4(dt, t, field, y_n):

    k1 = dt * field(t, y_n)
    k2 = dt * field(t + 0.5 * dt, y_n + 0.5 * k1)
    k3 = dt * field(t + 0.5 * dt, y_n + 0.5 * k2)
    k4 = dt * field(t + 0.5 * dt, y_n + k3)

    return y_n + (k1 + 2 * k2 + 2 * k3 + k4) / 6

if __name__ == '__main__':

    # the ODE y' = -t * y, which has the solution y = exp(-t^2 / 2)
    def field(t, vect):
        return -t * vect

    # Set the interval over which we want a solution.
    t_0 = -10
    t_n = 10
    dt = .05

    # Determine the number of steps in accordance with the mesh size.
    steps = int((t_n - t_0) / dt)
    time = np.linspace(t_0, t_n, steps, endpoint=False)
    # time = np.arange(t_0, t_n, dt)

    # Initialize solution vector and error collection;
    # x[0] is the exact value exp(-t_0^2 / 2) = exp(-50).
    x = np.zeros(steps)
    error = np.zeros(steps)
    x[0] = 1.928749848e-22
    error[0] = 0

    # Step the solution forward and record the relative error.
    for i in range(1, steps):
        x[i] = rk4(dt, time[i - 1], field, x[i - 1])
        exact = math.exp(-time[i] ** 2 / 2)
        error[i] = abs(x[i] - exact) / exact

Best Answer

Implementing the RK4 method as

import numpy as np

def RK4integrate(f, t, y0):
    # march the classical RK4 scheme across the given time grid
    y = np.asarray(len(t) * [y0])
    for i in range(len(t) - 1):
        h = t[i+1] - t[i]
        k1 = h * f(t[i], y[i])
        k2 = h * f(t[i] + 0.5 * h, y[i] + 0.5 * k1)
        k3 = h * f(t[i] + 0.5 * h, y[i] + 0.5 * k2)
        k4 = h * f(t[i+1], y[i] + k3)
        y[i+1, :] = y[i] + (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return y

and producing a combined plot of the solution graphs and of the error profiles (the relative error divided by the expected scale $h^4$) via

import matplotlib.pyplot as plt

def p(t): return np.exp(-t**2 / 2)      # exact solution
def odefunc(t, x): return -t * x        # right-hand side of the ODE


fig, ax = plt.subplots(2, 1, figsize=(12, 10))
t0, tmax = -10, 10
for h in [0.1, 0.05, 0.025, 0.01, 0.005][::-1]:
    t = np.arange(t0, tmax, h)
    y = RK4integrate(odefunc, t, np.array([p(t[0])]))
    ax[0].plot(t, y[:, 0], '-o', ms=1 + 13*h, label="h=%.3g" % h)
    ax[1].plot(t, (y[:, 0] / p(t) - 1) / h**4, '-o', ms=1 + 16*h, label="h=%.3g" % h)
for gr in ax:
    gr.grid()
    gr.legend()
plt.show()

produces the plot

[numerical solutions (top) and $h^4$-scaled relative error profiles (bottom)]

where the convergence of the error profiles shows clearly that the method has order 4, and that the transition from $e^{-50}$ at $t_0 = -10$ to the value $1$ at $t = 0$ produces a relatively benign relative error of about $1500\,h^4$ at $t = 0$.
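As a sanity check on that $1500\,h^4$ figure, the predicted relative errors at $t = 0$ for the step sizes used above can be tabulated directly:

for h in [0.1, 0.05, 0.025, 0.01, 0.005]:
    print(h, 1500 * h**4)   # predicted relative error at t = 0

Even the coarsest grid, $h = 0.1$, stays near $0.15$ there.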

Additionally, with smaller step sizes the error profile more and more reflects the symmetry of the problem: the errors at $t > 0$ have the opposite sign but about the same size as the errors at $-t$, so they compensate. This means that the $h^4$ error coefficient at $t = 10$ is zero, and what is visible there is the $h^5$ term, which accounts for the halving of the scaled relative error at each halving of the step size:

  h       relative error at t = 10     rel. error / h^4
--------------------------------------------------------
 0.005    5.9285699682831705e-08         94.85711949253073
 0.01     1.8950046616339478e-06        189.50046616339478
 0.025    0.00018492185995810928        473.39996149275964
 0.05     0.005975343139402733          956.0549023044372
 0.1      0.21902043404195348          2190.204340419534

Away from that point the errors behave as expected for a fourth order method.
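For reference, the table can be regenerated with the integrator above; a short sketch, assuming RK4integrate, odefunc, p, t0, and tmax are still in scope, and taking the error at the last grid point just below $t = 10$:

for h in [0.1, 0.05, 0.025, 0.01, 0.005]:
    t = np.arange(t0, tmax, h)
    y = RK4integrate(odefunc, t, np.array([p(t[0])]))
    err = abs(y[-1, 0] / p(t[-1]) - 1)   # relative error at the final point
    print(h, err, err / h**4)            # raw and h^4-scaled values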


Conclusion: your observed error curve is not reproducible with the implementation above. One difference worth noting: the code here evaluates $k_4$ at $t_{i+1}$, whereas the rk4 in the question evaluates $k_4$ at $t + 0.5\,\Delta t$. That off-node evaluation breaks the fourth-order accuracy and would explain why your error does not drop like $(\Delta t)^4$.
