[Math] Step size in Euler’s forward method

numerical methods, ordinary differential equations

I came across the following question. Kindly let me know if there is any generic solution to this type of question.

$$
\frac{d^2 y}{dt^2} + 3 \frac{dy}{dt} + 2y = f(t)
$$

where $f(t)$ is an impulse function. What is a suitable step size for the Euler forward difference method?

  1. 2
  2. 1.5
  3. 1
  4. 0.2

Best Answer

General rule on step size

Yes, there is a "generic" limit on the size of the time step. It is related to the stability of the method. Stability means that if the exact solution, or one of its components (or a linear combination of them), converges to zero, then the same should happen in the numerical solution. This does not guarantee that the numerical solution is accurate, only that it is not qualitatively wrong.

For the explicit Euler method, to avoid the most radically wrong qualitative behavior, you need to satisfy $$ |1+\lambda h|<1 $$ for every eigenvalue $\lambda$ with negative real part of the Jacobian of the first-order formulation. A weaker, necessary but not sufficient, condition for a step size that is not completely useless is $$ Lh<2 \qquad (\text{better use }Lh<1.5), $$ where $L$ is a Lipschitz constant of the first-order ODE system.
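To see where the first condition comes from, apply one Euler step to the scalar test equation $y'=\lambda y$ with $\operatorname{Re}\lambda<0$: $$ y_{n+1}=y_n+h\lambda y_n=(1+h\lambda)\,y_n, \qquad\text{so}\qquad y_n=(1+h\lambda)^n y_0, $$ which decays to zero exactly when $|1+h\lambda|<1$.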

Applied to the given equation

The first-order formulation of this ODE has a constant system matrix with the same characteristic polynomial as the second-order ODE, $\lambda^2+3\lambda+2=(\lambda+1)(\lambda+2)$, and thus eigenvalues $\lambda_1=-1$ and $\lambda_2=-2$, resulting in the condition $h<1$.
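For instance (this snippet is only an illustrative check, not part of the script below), numpy confirms these eigenvalues for the companion matrix of the homogeneous equation:

import numpy as np

# companion matrix of y'' + 3y' + 2y = 0 with state vector (y, y')
A = np.array([[ 0.0,  1.0],
              [-2.0, -3.0]])
print(np.linalg.eigvals(A))   # eigenvalues -1 and -2 (order may differ)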

If one uses a semi-factored first-order system with $v=y'+y$, then \begin{align} y'&=v-y,\\ v'&=-2v \end{align} has Lipschitz constant $L=2$ in the maximum norm. Only $h=0.2$ is not disqualified by the stability condition in both versions.
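As a small sketch (again purely illustrative, with made-up variable names), one can tabulate both criteria, $|1+\lambda h|<1$ for $\lambda\in\{-1,-2\}$ and $Lh<2$ with $L=2$, for the four candidate step sizes:

import numpy as np

lams = np.array([-1.0, -2.0])    # eigenvalues of the system matrix
L = 2.0                          # Lipschitz constant of the semi-factored system (max norm)

for h in [2, 1.5, 1, 0.2]:
    growth = np.abs(1 + lams*h).max()        # worst-case Euler growth factor
    print(f"h={h:>4}: max|1+lambda*h| = {growth:.2f}, "
          f"stable: {growth < 1}, L*h = {L*h:.1f}, L*h < 2: {L*h < 2}")

Only $h=0.2$ passes both tests, in agreement with the argument above.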

Interpretation of the distributional right side as initial conditions

Interpreting $f=\delta$ as the Dirac delta "function", with the usual assumption that $y(t)=0$ for $t<0$, integrating the equation over a small interval around $t=0$ shows that $y'$ jumps from $0$ to $1$ at $t=0$. Thus one gets the initial conditions (as right-sided limits) $y(0^+)=0$, $y'(0^+)=1$ for the solution on $t>0$. The exact solution there is $$y(t)=e^{-t}-e^{-2t}.$$
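One can double-check this solution symbolically, for example with sympy (not used in the script below; this is purely a verification sketch):

import sympy as sp

t = sp.symbols('t', positive=True)
y = sp.exp(-t) - sp.exp(-2*t)

# residual of the homogeneous equation y'' + 3y' + 2y for t > 0
print(sp.simplify(sp.diff(y, t, 2) + 3*sp.diff(y, t) + 2*y))        # 0
# right-sided initial values y(0+) and y'(0+)
print(sp.limit(y, t, 0, '+'), sp.limit(sp.diff(y, t), t, 0, '+'))   # 0 1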

Numerical experiment

It is easy to perform the numerical experiment in any scripting or programming language. Here I set this up as a Python script that compares the exact solution to the numerical solutions for the given step sizes:

import numpy as np
import matplotlib.pyplot as plt

# set up the first-order system u = (y, v) with v = y'
def odesys(u):
    y, v = u
    return np.array([v, -3*v - 2*y])

# plot the exact solution
t = np.arange(0, 10, 0.1)
plt.plot(t, np.exp(-t) - np.exp(-2*t), color='lightgray', lw=4)

# Euler method for the 4 step sizes
for h in [2, 1.5, 1, 0.2]:
    t = np.arange(0, 10, h)
    # initial conditions y(0+) = 0, y'(0+) = 1
    u = [np.array([0.0, 1.0])]
    # perform Euler steps according to the time array
    for s in t[:-1]:
        u.append(u[-1] + h*odesys(u[-1]))
    # plot the solution component y
    u = np.asarray(u)
    plt.plot(t, u[:, 0], '-o', ms=2 + 2*h)

plt.ylim(-2.5, 2.5)
plt.grid()
plt.show()

[Plot: exact solution (light gray) and Euler approximations for h = 2, 1.5, 1, 0.2]

As can be seen, the solution for $h=1$ stays only barely bounded, while the solution for $h=0.2$ slightly overshoots the exact solution but correctly falls to zero.


Addendum: General remark on effort vs. order

Remember that the Euler methods are of order $1$. As an example, to get an error of magnitude $10^{-4}$ over a time interval of length $1$ would require about $10000$ steps/function evaluations.

With an order $2$ method, such as Heun's method (improved Euler), one would need about $100$ steps, i.e. $200$ function evaluations.

With an order $4$ method like the classical Runge-Kutta method, one would need, for the same error magnitude, about $10$ steps with $40$ function evaluations.

Hence the advantage of higher-order methods, and the reason the question about the Euler method is purely academic.
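A rough numerical illustration of this effort comparison, on the simple test problem $y'=-y$ on $[0,1]$ (chosen here only for illustration; the error constants are problem-dependent, so the numbers will not match the rough estimates exactly), could look like this:

import numpy as np

def f(y):                          # test problem y' = -y, exact solution exp(-t)
    return -y

def euler(y, h):                   # order 1, 1 function evaluation per step
    return y + h*f(y)

def heun(y, h):                    # order 2 (improved Euler), 2 evaluations per step
    k1 = f(y); k2 = f(y + h*k1)
    return y + h*(k1 + k2)/2

def rk4(y, h):                     # classical Runge-Kutta, order 4, 4 evaluations per step
    k1 = f(y); k2 = f(y + h/2*k1); k3 = f(y + h/2*k2); k4 = f(y + h*k3)
    return y + h*(k1 + 2*k2 + 2*k3 + k4)/6

for name, step, n, evals in [("Euler", euler, 10000, 1), ("Heun", heun, 100, 2), ("RK4", rk4, 10, 4)]:
    h, y = 1.0/n, 1.0
    for _ in range(n):
        y = step(y, h)
    print(f"{name:>5}: {n:>5} steps, {n*evals:>5} evaluations, error = {abs(y - np.exp(-1)):.1e}")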
