Implicit numerical ODE solvers: still unconditionally stable if Newton iterations limited

implicit-function, numerical-methods, ordinary-differential-equations, runge-kutta-methods, simulation

Implicit ODE solvers like Backward Euler are often described as unconditionally stable, which I believe means that the solution never blows up regardless of how large the solver step size is chosen.

But implicit solvers like Backward Euler rely on iterative numerical methods like Newton-Raphson, which can in some cases take many iterations to converge on a solution, and which can fail outright, e.g., if the solution is near a discontinuity.

When implicit solvers are described as unconditionally stable, is this assuming that an unlimited number of iterations is allowed and that the solution is in a well-behaved region (e.g., without discontinuities)?

More specifically, if the number of Newton iterations is fixed to 2 (meaning that two iterations are always done at each time step and then the solver moves on regardless of the error remaining in the solution), does Backward Euler remain unconditionally stable?

If the equation system happens to be challenging so that, say, 70 iterations are required for an accurate solution, is the solution still guaranteed not to blow up with a limit of 2 iterations?

I know that accuracy would likely suffer if more than 2 iterations are required, but here I'm only interested in whether the solution blows up by virtue of limiting the number of iterations so severely.

Many thanks for entertaining this non-mathematician's question and for sharing an answer if you have one!

Best Answer

The idea of stability is that if an exact ODE solution converges (quickly, transiently) towards a stable configuration (that might be slowly changing) then the numerical solution does the same and stays converged in this sense. This can and will depend on the size of the numerical steps.

This general idea is not easy to translate into formal terms. As always, it is easiest to apply to linear systems. As these decouple via diagonalization into scalar complex-valued equations, it is sufficient to consider the scalar test equation $y' = \lambda y$. As this scales linearly with the time scale, stability conditions can be expressed in terms of $z = \lambda h$. Such results can be applied locally to non-linear systems; step sizes large enough to contradict that locality may invalidate the stability properties.
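To make this concrete, here is a small sketch (function names are my own) that evaluates the stability functions of Backward Euler, $R(z) = 1/(1-z)$, and explicit Euler, $R(z) = 1+z$, on the test equation $y' = \lambda y$. A-stability of Backward Euler means $|R(z)| \le 1$ for the whole left half-plane $\operatorname{Re}(z) \le 0$, no matter how large $|z|$ gets:

```python
def R_backward_euler(z):
    # One Backward Euler step on y' = lam*y gives
    # y_{n+1} = y_n + z*y_{n+1}, i.e. y_{n+1} = y_n / (1 - z).
    return 1.0 / (1.0 - z)

def R_forward_euler(z):
    # Explicit Euler for comparison: y_{n+1} = (1 + z) * y_n.
    return 1.0 + z

# Sample the stable left half-plane, including very large |z|:
for z in [-0.5, -10.0, -1000.0, complex(-1.0, 50.0)]:
    print(z, abs(R_backward_euler(z)), abs(R_forward_euler(z)))
```

For $z = -1000$, Backward Euler damps the solution ($|R| \approx 10^{-3}$) while explicit Euler amplifies it by a factor of about 999 per step, which is the "blows up" behavior the question refers to.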

Yes, when discussing stability it is assumed that the implicit step equation is solved exactly.

"Never blows up" is for situations where the exact solution stays bounded in some neighborhood of the current base point. If the exact solution has a divergence, then numerical methods tend to follow that, even if not at the exact same time.

Arbitrarily large step sizes only apply to linear equations and systems, which also are the defining case for A-stability. There are approaches to define stability in general terms also for non-linear equations.

Usually the step size control of the method will reduce the step size when the number of Newton iterations becomes too large. Note that for practicality the implicit solver is a simplified Newton method, often with an approximate Jacobian. This still maintains an error reduction by a factor of $O(h^2)$ per iteration, so if you start with a good predictor (for instance, extrapolation from the last step), the first Newton step should already bring the residual of the implicit equation down to the order of the method. After the second Newton step the implicit-equation error should be very small compared to the method's step error, which indeed is often good enough. However, this only applies to low-dimensional, moderately hard problems.

Doing a fixed number of Newton steps makes the method into an explicit method with a bounded stability region. To retain the character of an implicit method you need to control the convergence of the implicit solver. But 70 Newton steps is likely too many. Chances are that with half the step size, the combined number of Newton iterations for the two resulting steps is much smaller.
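The "becomes explicit" effect can be seen in closed form on $y' = \lambda y$. A caveat for this illustration: with an exact Jacobian, Newton solves the linear implicit equation in one step, so to mimic a crude simplified-Newton solver I use the Jacobian approximation $J \approx 0$, which turns each Newton step into the functional iteration $u_{j+1} = y_n + z\,u_j$. Stopping after $k$ iterations then gives the polynomial amplification factor $1 + z + \dots + z^k$ instead of the true Backward Euler factor $1/(1-z)$:

```python
def fixed_iter_amplification(z, k):
    """Amplification factor per step when the Backward Euler equation
    u = y_n + z*u is 'solved' by exactly k functional iterations
    u_{j+1} = y_n + z*u_j, starting from the predictor u_0 = y_n.
    (Equivalent to simplified Newton with Jacobian approximation J ~ 0.)"""
    u = 1.0  # work with y_n = 1, so u is the amplification factor
    for _ in range(k):
        u = 1.0 + z * u
    return u

z = -3.0  # a moderately stiff step, lam*h = -3
print(fixed_iter_amplification(z, 2))   # 1 + z + z^2 = 7: the step grows
print(abs(1.0 / (1.0 - z)))             # exact Backward Euler: 0.25, damped
```

With 2 iterations the numerical solution grows by a factor of 7 per step for $z = -3$, even though exact Backward Euler would damp it, so the fixed-iteration scheme is only stable inside the bounded region $|1 + z + z^2| \le 1$. For small $|z|$ the iteration does converge to the implicit value $1/(1-z)$, which matches the point that halving the step size can drastically reduce the iteration count.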
