I want to get an intuitive understanding of Heun's method and why it has order of consistency 2, i.e. the error is in $O(h^{3})$ where $h$ is the step width. I have read this everywhere, and my professor states it without proving it. All the resources I have seen so far on the web omit the proof as trivial, and the only one that attempts it does it incorrectly. I am at a loss. Can someone explain it to me in as much detail as possible?
[Math] Heun’s method: Why is it of second order
numerical-methods, ordinary-differential-equations, runge-kutta-methods
Related Solutions
Looking at your expression again, this still isn't the correct implementation of Heun's method; check the Wikipedia article! You calculate $\tilde x_{n+1}= x_n+h\lambda x_n$ and then $x_{n+1} = x_n+\frac{h\lambda}{2}(x_n+\tilde x_{n+1})$. This is not implicit but explicit, so there is no equation to solve. You should find that $x_{n+1}$ is simply $x_n$ times the first three terms of the Taylor series of $e^{h\lambda}$.
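A quick sanity check of that claim (a minimal sketch; the values of $\lambda$, $h$, $x_0$ are arbitrary test values of mine): one explicit Heun step on $x'=\lambda x$ reproduces the three-term truncated exponential series exactly, up to rounding.

```python
# One explicit Heun step for x' = lam*x, compared with the
# truncated Taylor series x_n*(1 + h*lam + (h*lam)**2/2).
lam, h, x0 = -0.7, 0.1, 1.3   # arbitrary test values

pred = x0 + h * lam * x0                       # Euler predictor
heun = x0 + 0.5 * h * (lam * x0 + lam * pred)  # trapezoidal corrector

taylor3 = x0 * (1 + h * lam + (h * lam) ** 2 / 2)
print(heun - taylor3)  # zero up to rounding
```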
On theoretical lower accuracy bounds for order 2 methods
Heun's method is a second-order method; that means that the global error is of second order and the local discretization error satisfies $$ |y_n(t_n+h)-y_n(t_n)-h·\Phi_f(t_n,y_n(t_n),h)|\le C\cdot h^3, $$ where $y_n(t)$ solves the IVP $y_n'=f(t,y_n)$, $y_n(t_n)=y_n$. To be measurable, this must not fall below the machine precision $\mu\sim 10^{-16}$, so $h\sim 10^{-5}$ is roughly the smallest step size for which the order behaves numerically as it does theoretically.
The estimate of the global error has the form and size $$ C·T·h^2+\frac{D·T·\mu}{h}\ge 3·T·\left(\frac14·C·(D·\mu)^2\right)^{\frac13}, \qquad\text{with equality for }h=\left(\frac{D·\mu}{2·C}\right)^{\frac13}. $$ The second term stands for the accumulation of floating-point errors of about $D·\mu$ in each step over $N=T/h$ steps (a more accurate factor than $N$ for longer time intervals is $(e^{LT}-1)/(Lh)$ with the Lipschitz constant $L$).
This is a convex function of $h$ with a minimum, which gives the limit for any realistic tolerance prescription. Assuming the constants $C,D,T$ all have magnitude $1$, the lower limit is about $tol=10^{-10}$, attained at a step size of again $h\sim 10^{-5}$. With a prescription of $tol=10^{-12}$, the number $N=10^6$ of steps guarantees an accumulation of floating-point errors of size at least $10^{-10}$, thus preventing this tolerance from being reached.
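Plugging in the magnitudes assumed above ($C=D=T=1$, $\mu=10^{-16}$) reproduces these figures; a minimal numeric check:

```python
mu, C, D, T = 1e-16, 1.0, 1.0, 1.0   # assumed magnitudes from the text

h_opt = (D * mu / (2 * C)) ** (1 / 3)                   # minimizer of C*T*h^2 + D*T*mu/h
err_min = 3 * T * (0.25 * C * (D * mu) ** 2) ** (1 / 3) # value of the error model there

print(h_opt)    # about 3.7e-06, i.e. of magnitude 1e-05
print(err_min)  # about 4e-11, i.e. of magnitude 1e-10
```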
On ways to have a correct adaptive scheme
The following corrects your Python code to a version where $h$ is fixed over the full integration interval and adapted over several integration runs, so that the relative error at the end lies between tol/4 and tol. This is the variant closest to your code, but probably not what was originally intended. The original intent may have been to adapt the value of $h$ in each Heun step, but that would require more radical changes to your code.
Integrating once with step size $h$ to get the value $y_1$ and once with $h/2$ to get $y_2$, one obtains $y_1=y^*+C·h^2+O(h^3)$ and $y_2=y^*+\frac14C·h^2+O(h^3)$. This can be used to compute better approximations for the error terms and the exact value: $$ y^*=\frac{4·y_2-y_1}3+O(h^3),\quad y_1-y^*=\frac43(y_1-y_2)+O(h^3),\quad y_2-y^*=\frac13(y_1-y_2)+O(h^3) $$
import math

def f(x, y):
    return math.sin(x + y)

def Heun(f, x, y, h, end):
    ''' integrate y'=f(x,y) from x to end with fixed step size h '''
    while x < end:
        # floating point error may make x over- or undershoot end
        if x + h >= end:
            h = end - x
        k0 = f(x, y)
        k1 = f(x + h, y + h * k0)
        x += h
        y += h * 0.5 * (k0 + k1)
    return y

def AdaptDiff(diffF, f, x, y, h, end, tol):
    # print debugging information on the recursion level
    print("called with h=", h)
    if abs(1. - x / end) < tol:
        return y
    # integrate the full interval,
    # once with step size h and once with h/2
    y1 = diffF(f, x, y, h, end)
    y2 = diffF(f, x, y, h / 2.0, end)
    # print debugging information on the approximations found
    print("y1=%.15f, y2=%.15f, y1-y2=%.4e, y*=%.15f" % (y1, y2, y1 - y2, (4 * y2 - y1) / 3))
    # if the relative error is too large, decrease the step size
    err = abs(1. - y1 / y2) / 3
    if err > tol:
        while err > tol / 2:
            h /= 2.
            err /= 4.
        return AdaptDiff(diffF, f, x, y, h, end, tol)
    # if the relative error is far too small, increase the step size,
    # but this will rarely happen
    if err < tol / 64:
        return AdaptDiff(diffF, f, x, y, h * 4., end, tol)
    # return the extrapolated value; one might also return y2
    return (4 * y2 - y1) / 3

x0 = 0; y0 = 1; xe = 1
tol = 1e-10; h = (tol / (xe - x0)) ** 0.5
print("returned %.16f" % AdaptDiff(Heun, f, x0, y0, h, xe, tol))
This recursion finishes in one step, i.e., with two integration runs with step sizes h=1e-05 and h/2=5e-06 for the objective tol=1e-10:
called with h= 1e-05
y1=1.801069211924622, y2=1.801069211940325, y1-y2=-1.5703e-11, y*=1.801069211945559
returned 1.8010692119455591
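The effect of the extrapolation can also be checked in isolation (a minimal sketch; the test problem $y'=y$ on $[0,1]$ with exact value $e$ is my choice, not from the question): the errors of the two runs differ by the expected factor of about 4, and the extrapolated value $(4y_2-y_1)/3$ is far more accurate than either.

```python
import math

def heun(f, t, y, h, n):
    # n explicit Heun steps of size h
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y += 0.5 * h * (k1 + k2)
        t += h
    return y

f = lambda t, y: y                      # exact solution y(t) = e^t
n = 100
y1 = heun(f, 0.0, 1.0, 1.0 / n, n)      # step size h = 1/100
y2 = heun(f, 0.0, 1.0, 0.5 / n, 2 * n)  # step size h/2
y_star = (4 * y2 - y1) / 3              # Richardson extrapolation

print(abs(y1 - math.e), abs(y2 - math.e), abs(y_star - math.e))
# the extrapolated value is several orders of magnitude more accurate
```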
Best Answer
I am assuming you mean the explicit method given by \begin{align} &k_1 = f(t_n, y_n) \\ &k_2 = f(t_n + h, y_n + hk_1) \\ &y_{n + 1} = y_n + \frac{h}{2}\left(k_1 + k_2\right) \end{align} To analyze the truncation error, assume $y_n = y(t_n)$. From Euler's method we know $$y_n + hk_1 = y(t_n + h) + O(h^2)$$ Therefore, by Taylor expanding $f$ to first order in its second argument, $$k_2 = f(t_n + h, y(t_n + h) + O(h^2)) = f(t_n + h, y(t_n + h)) + O(h^2)$$ Hence Heun's method is almost the trapezoid method for integration applied to $f$ to approximate $y(t_n + h) - y_n = \int_{t_n}^{t_n + h}f(t, y(t))\,dt$, except that it differs from it by $\frac{h}{2}O(h^2) = O(h^3)$. The trapezoid method is known to have local error $O(h^3)$, and Heun's method differs from it by $O(h^3)$, so its local error is also $O(h^3)$.
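This $O(h^3)$ local error is easy to observe numerically (a minimal sketch; the test problem $y'=y$, $y(0)=1$ is my choice, not from the answer): halving $h$ should divide the one-step error by about $2^3 = 8$.

```python
import math

def heun_step(f, t, y, h):
    # one explicit Heun step
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    return y + 0.5 * h * (k1 + k2)

f = lambda t, y: y  # exact solution y(t) = e^t, so y(h) = e^h
errs = [abs(heun_step(f, 0.0, 1.0, h) - math.exp(h)) for h in (0.1, 0.05, 0.025)]
ratios = [errs[i] / errs[i + 1] for i in range(2)]
print(ratios)  # both ratios near 8, confirming local error O(h^3)
```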
Alternatively, Taylor expanding Heun's method and comparing it to the Taylor expansion of $y(t_n + h)$ will give you the result. This strategy also gives the conditions for a Runge–Kutta method to achieve a desired order.
For example, for two-stage methods of order 2:
Assume a method of the form \begin{align} &k_1 = f(t_n, y_n) \\ &k_2 = f(t_n + c_2h, y_n + ha_{21}k_1) \\ &y_{n + 1} = y_n + h(b_1k_1 + b_2k_2) \\ \end{align} Then Taylor expanding $k_2$ to $O(h^2)$ gives \begin{align} k_2 &= f + f_{t}c_2h + f_{y}ha_{21}k_1 + O(h^2) \\ &= f + (c_{2}f_{t} + a_{21}f_{y}f)h + O(h^2). \end{align} where the arguments $(t_n, y_n)$ are omitted. Hence \begin{align} y_{n + 1} = y_{n} + (b_1 + b_2)fh + (b_{2}c_{2}f_{t} + b_{2}a_{21}f_{y}f)h^2 + O(h^3). \end{align} On the other hand, \begin{align} y(t_n + h) = y_{n} + y'(t_n)h + \frac{y''(t_n)}{2}h^2 + O(h^3). \end{align} Since $y'(t) = f(t, y(t))$, we get \begin{align} &y'(t_n) = f \\ &y''(t_n) = f_t \cdot 1 + f_yy' = f_t + f_{y}f. \end{align} Therefore $$y(t_n + h) = y_n + fh + \frac{f_t + f_yf}{2}h^2 + O(h^3)$$ Matching terms gives \begin{align} &b_1 + b_2 = 1 \\ &b_2c_2 = \frac{1}{2} \\ &b_2a_{21} = \frac{1}{2}. \end{align} Heun's method is $c_2 = 1$, $b_1 = b_2 = \frac{1}{2}$, $a_{21} = 1$.
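Note that the order conditions admit a one-parameter family: choosing $c_2 = a_{21} = \alpha$ and $b_2 = \frac{1}{2\alpha}$, $b_1 = 1 - \frac{1}{2\alpha}$ satisfies all three conditions for every $\alpha \neq 0$ ($\alpha = 1$ is Heun's method, $\alpha = \frac12$ the explicit midpoint method). A minimal numerical check (the test problem $y' = ty$ with exact solution $e^{t^2/2}$ is my choice): quartering $h$ should divide the global error by about $16$ for every member of the family.

```python
import math

def rk2_step(f, t, y, h, alpha):
    # generic two-stage method: c2 = a21 = alpha, b2 = 1/(2*alpha), b1 = 1 - b2
    b2 = 1.0 / (2.0 * alpha)
    k1 = f(t, y)
    k2 = f(t + alpha * h, y + alpha * h * k1)
    return y + h * ((1.0 - b2) * k1 + b2 * k2)

def integrate(alpha, n):
    f = lambda t, y: t * y        # exact solution y(t) = exp(t^2/2)
    y, h = 1.0, 1.0 / n
    for i in range(n):
        y = rk2_step(f, i * h, y, h, alpha)
    return abs(y - math.exp(0.5)) # global error at t = 1

for alpha in (1.0, 0.5, 0.75):    # Heun, midpoint, an arbitrary member
    print(alpha, integrate(alpha, 100) / integrate(alpha, 400))  # ratios near 16
```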