Absolute stability of numerical methods for ODEs

Tags: numerical-methods, ordinary-differential-equations, runge-kutta-methods, stability-in-odes

I'm having trouble understanding the meaning of the region of absolute stability for numerical methods for ODEs. I know that we can restrict the study of the stability of a method to the case of the test equation:
$$ \begin{cases}
y'(x) = \lambda y(x) \\
y(0) = 1
\end{cases} $$

and that, for a linear multistep method, the approximations $ \{ y_{n} \} $ tend to $0$ as $n \rightarrow \infty$ if and only if the roots of the characteristic polynomial (with $h$ standing for the step length) $$ \sum_{i=0}^{k}(\alpha_{i} -\beta_{i}h\lambda)t^{k-i} $$ all have modulus $<1$. The region of absolute stability of the trapezoidal rule is $\{ h\lambda \in \mathbb{C} : \mathbf{Re}(h\lambda) < 0 \}$, so I would expect the method to be unstable for any choice of $\lambda > 0$. But when I tried it in MATLAB on the test equation with $\lambda = 1$, the approximations $\{ y_{n}\} \rightarrow 0 $ as $n \rightarrow \infty$.
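
To make the criterion concrete, here is a minimal sketch (Python instead of my MATLAB code; the helper `stability_roots` is made up for illustration) that checks the root condition for the trapezoidal rule written as the two-level scheme $y_{n+1} - y_{n} = \frac{h}{2}(f_{n+1} + f_{n})$, i.e. $\alpha = (1, -1)$, $\beta = (\tfrac12, \tfrac12)$:

```python
import numpy as np

def stability_roots(alpha, beta, z):
    """Roots t of sum_i (alpha_i - beta_i * z) * t^(k-i), where z = h*lambda."""
    coeffs = [a - b * z for a, b in zip(alpha, beta)]  # degree k down to 0
    return np.roots(coeffs)

# Trapezoidal rule as a multistep scheme: alpha = (1, -1), beta = (1/2, 1/2).
alpha, beta = (1.0, -1.0), (0.5, 0.5)

for z in (-1.0, 1.0, -2.0 + 1.0j):       # sample values of h*lambda
    r = stability_roots(alpha, beta, z)
    label = "stable" if np.all(np.abs(r) < 1) else "not stable"
    print(z, np.abs(r), label)
# z = -1 gives root 1/3 (stable), z = 1 gives root 3 (not stable),
# consistent with the region Re(h*lambda) < 0.
```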

Best Answer

Stability relates only to whether ODEs whose exact solutions converge (in some sense) or get trapped in some bounded region see this behavior replicated in the numerical solutions. ODE solutions that do not show this behavior are of no interest for the question of stability. In particular, for $\lambda = 1$ the exact solution of the test equation grows without bound, so the region of absolute stability makes no statement about this case.
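
For the concrete experiment in the question this can be checked directly. A minimal Python sketch (using the update $y_{n+1} = \frac{1 + h\lambda/2}{1 - h\lambda/2}\, y_{n}$ that the trapezoidal rule gives on $y' = \lambda y$) shows the iterates growing for $\lambda = 1$, just as the exact solution does:

```python
# Trapezoidal rule on y' = lam*y, y(0) = 1: the update reduces to
# multiplication by (1 + h*lam/2) / (1 - h*lam/2) per step.
lam, h, y = 1.0, 0.1, 1.0
growth = (1 + h * lam / 2) / (1 - h * lam / 2)   # ≈ 1.105 > 1 for lam = 1
for _ in range(50):                               # integrate up to x = 5
    y *= growth
print(y)   # ≈ 149, close to the exact value exp(5) ≈ 148.4 -- it grows
```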

A first quantitative measure for this general idea can be obtained for linear test systems, where stability can be characterized by the product of eigenvalue and step size falling into some stability region in the complex plane. This leads to the notion of A-stability.
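
As a sketch of how such a region can be inspected numerically (here for the trapezoidal rule, whose stability function on the test equation is $R(z) = \frac{1 + z/2}{1 - z/2}$ with $z = h\lambda$): sampling $|R(z)|$ on a grid confirms that $|R(z)| < 1$ holds exactly on the left half-plane:

```python
import numpy as np

# Sample the stability function R(z) = (1 + z/2) / (1 - z/2) of the
# trapezoidal rule on a grid in the complex plane (grid chosen so that
# no point lies exactly on the boundary Re(z) = 0).
x, y = np.meshgrid(np.linspace(-4, 4, 400), np.linspace(-4, 4, 400))
z = x + 1j * y
R = (1 + z / 2) / (1 - z / 2)
inside = np.abs(R) < 1                       # the absolute-stability region
print(np.array_equal(inside, x < 0))         # True: region is Re(z) < 0
```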


Note that there is a difference between stability and convergence.

  • Convergence is concerned with what happens as the step size goes to zero. For multistep methods, consistency and zero-stability together guarantee convergence.
  • Stability is concerned with how (moderately) large one can make the step size and still get a result that is qualitatively, and somewhat quantitatively, correct; see the sketch after this list.
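
To see the two notions side by side, here is a small sketch with the explicit Euler method (stability region $|1 + h\lambda| < 1$; the problem $y' = -50\,y$ and the two step sizes are chosen for illustration): a small step size lands inside the stability region and converges toward the decaying exact solution, while a moderately large one outside the region blows up:

```python
# Explicit Euler on y' = lam*y with lam = -50 (decaying exact solution),
# integrated up to x = 1.
lam = -50.0

def euler(h, x_end=1.0):
    y = 1.0
    for _ in range(round(x_end / h)):
        y += h * lam * y          # one explicit Euler step
    return y

print(euler(0.001))   # h*lam = -0.05, inside |1 + h*lam| < 1:
                      # ≈ 5e-23, near the exact exp(-50) ≈ 2e-22,
                      # and shrinking h further closes the gap (convergence)
print(euler(0.05))    # h*lam = -2.5, outside the region:
                      # ≈ 3.3e+3, blowing up although the solution decays
```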

You get some interplay between these concepts in numerical solvers with adaptive step size. If the system approaches an equilibrium, then at some point the error tolerances would be satisfied even if the solution were to stay constant, which suggests that the step size could become arbitrarily large. In fact it is restricted by the stability region of the method and by the largest eigenvalue of the Jacobian at the stationary point. In practice this has the consequence that, over a large range of error tolerances, the method produces the same final step sizes.
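
A minimal sketch of that effect (a hypothetical toy controller, not a production integrator: explicit Euler with a Heun step as error estimator, applied to $y' = -\lambda(y - 1)$ approaching the equilibrium $y = 1$): over a wide range of tolerances the accepted step size settles near the explicit-Euler stability bound $2/\lambda$ instead of growing without limit:

```python
# Hypothetical minimal adaptive controller: explicit Euler, with a Heun
# step as error estimator, on y' = -lam*(y - 1).  Near the equilibrium
# y = 1 the local error alone would permit huge steps, but stability caps
# the accepted step size near 2/lam = 0.04.
lam = 50.0
f = lambda y: -lam * (y - 1.0)

def final_step_size(tol, n_steps=100_000):
    y, h = 0.0, 1e-4
    for _ in range(n_steps):
        k1 = f(y)
        k2 = f(y + h * k1)                       # slope after a trial Euler step
        err = abs(h * (k2 - k1) / 2) + 1e-300    # Euler-vs-Heun difference
        if err <= tol:                           # accept the Euler step
            y += h * k1
        # standard step-size controller with clamped growth/shrink factors
        h *= min(1.5, max(0.2, 0.9 * (tol / err) ** 0.5))
    return h

for tol in (1e-3, 1e-6, 1e-9):
    # All tolerances end up with h near the stability bound 2/lam = 0.04
    # (up to a modest factor), not with h growing as the tolerance would allow.
    print(tol, final_step_size(tol))
```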
