Sufficient conditions for bounded output of an explicit linear multi-step method

Tags: numerical-methods, ordinary-differential-equations, stability-in-odes, stability-theory

The system:
\begin{align}
\dot{x}(t) &= f(x(t))\\
x(t_0) &= x_0
\end{align}

is being solved by an explicit linear multi-step method (assume perfect initialization):
\begin{equation}
\widetilde{x}_{n+s} = \sum_{j=0}^{s-1}a_j\widetilde{x}_{n+j} + h\sum_{j=0}^{s-1}b_jf(\widetilde{x}_{n+j})
\end{equation}
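For concreteness, here is a minimal Python sketch of this recurrence (the helper name `explicit_lmm` and its signature are my own, purely for illustration):

```python
import numpy as np

def explicit_lmm(f, start, a, b, h, n_steps):
    """Iterate x_{n+s} = sum_j a_j x_{n+j} + h sum_j b_j f(x_{n+j}).

    start : the s exact starting values (perfect initialization)
    a, b  : coefficient lists a_0..a_{s-1} and b_0..b_{s-1}
    """
    s = len(a)
    x = list(start)
    for n in range(n_steps):
        window = x[n:n + s]                      # x_n, ..., x_{n+s-1}
        x.append(sum(a[j] * window[j] + h * b[j] * f(window[j])
                     for j in range(s)))
    return np.array(x)

# Example: the 2-step Adams-Bashforth method has a = [0, 1], b = [-1/2, 3/2].
```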

What are sufficient conditions for the existence of constants $C, H > 0$ such that for all $h \in [0, H)$ the numerical solution of the method is bounded,
\begin{equation}
\lvert\widetilde{x}_n\rvert < C,
\end{equation}

for all $n$ with $t_0 \leqslant t_0 + nh \leqslant t_{end}$?


Assumption:

The above method is zero-stable if the roots of the polynomial
\begin{equation}
\psi(u) = u^s -\sum_{j=0}^{s-1}a_ju^j
\end{equation}

lie within the closed unit disk and the roots on the unit circle are simple.
Zero-stability is a sufficient condition for the above method to give a bounded output in the sense stated above.
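This root condition is easy to check numerically. A sketch, assuming NumPy (the helper `is_zero_stable` and the tolerance handling are mine):

```python
import numpy as np

def is_zero_stable(a, tol=1e-9):
    """Root condition for psi(u) = u^s - sum_{j<s} a_j u^j."""
    # np.roots expects coefficients from the highest power down:
    # [1, -a_{s-1}, ..., -a_0]
    coeffs = np.concatenate(([1.0], -np.asarray(a, dtype=float)[::-1]))
    roots = np.roots(coeffs)
    if np.any(np.abs(roots) > 1 + tol):
        return False                      # root outside the closed unit disk
    on_circle = roots[np.abs(np.abs(roots) - 1) <= tol]
    for i, r in enumerate(on_circle):
        if np.any(np.abs(np.delete(on_circle, i) - r) <= tol):
            return False                  # repeated root on the unit circle
    return True

print(is_zero_stable([0, 1]))    # True:  psi(u) = u^2 - u, roots 0 and 1
print(is_zero_stable([5, -4]))   # False: psi(u) = u^2 + 4u - 5, root -5
```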


My thoughts:

Zero-stability is a necessary condition for the above method to be convergent (see e.g. Stoer, Bulirsch – Introduction to Numerical Analysis (2002)).
This can be shown with the example system $\dot{x} = 0$; the same argument shows that zero-stability is necessary for the existence of the above bounds (see the sketch below).
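A sketch of this mechanism (note that in exact arithmetic with truly perfect starting values the iteration below would stay constant; the tiny perturbation stands in for the vanishing starting errors used in the necessity proof):

```python
# The 2-step method x_{n+2} = -4 x_{n+1} + 5 x_n + h (4 f_{n+1} + 2 f_n)
# is consistent (order 3) but not zero-stable: psi(u) = u^2 + 4u - 5 has
# the root u = -5.  For xdot = 0 all h-terms vanish, and a starting
# perturbation eps grows like (eps / 6) * (-5)^n.
x = [1.0, 1.0 + 1e-15]                   # perturbed by roughly one ulp
for n in range(30):
    x.append(-4.0 * x[-1] + 5.0 * x[-2])
print(x[-1])                             # ~1e6: amplified by ~5^n, unbounded
```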

Zero-stability and consistency of a method are together sufficient for its convergence.
The solution $x(t)$ is differentiable, hence continuous, on the whole interval, and a continuous function is bounded on a closed interval.
If the method is convergent, then its global error is bounded for sufficiently small step sizes.
This means convergence is sufficient for the bounded output of the method, as the inequality below makes explicit.
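Written out, this last step is the triangle inequality combined with convergence (here $E(h)$ denotes the maximal global error on the interval, a symbol introduced only for this argument):
\begin{equation}
\lvert\widetilde{x}_n\rvert \leqslant \lvert x(t_0+nh)\rvert + \lvert\widetilde{x}_n - x(t_0+nh)\rvert \leqslant \max_{t\in[t_0,\,t_{end}]}\lvert x(t)\rvert + E(h),
\end{equation}
and $E(h)$ stays bounded as $h \to 0$ for a convergent method.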

Can this condition be relaxed?
My intuition tells me that zero-stability without consistency is enough for the method to give a bounded output.
Is this intuition correct and can it be proven?

Best Answer

I am going to try to answer my own question. Zero-stability and Lipschitz continuity of $f : U \rightarrow \mathbb{R}$ are enough to prove the existence of the bounds.


The method described in the question can be written with the help of a companion matrix:
\begin{equation}
\left[\begin{array}{c}
\widetilde{x}_{n+s}\\
\widetilde{x}_{n+s-1}\\
\vdots\\
\widetilde{x}_{n+2}\\
\widetilde{x}_{n+1}
\end{array}\right]
=
\left[\begin{array}{ccccc}
a_{s-1} & a_{s-2} & \dots & a_1 & a_{0}\\
1 & 0 & \dots & 0 & 0 \\
0 & 1 & \ddots & \vdots & 0 \\
\vdots & \ddots & \ddots & 0 & \vdots\\
0 & \dots & 0 & 1 & 0
\end{array}\right]
\left[\begin{array}{c}
\widetilde{x}_{n+s-1}\\
\widetilde{x}_{n+s-2}\\
\vdots\\
\widetilde{x}_{n+1}\\
\widetilde{x}_{n}
\end{array}\right]
+ h
\left[\begin{array}{ccccc}
b_{s-1} & b_{s-2} & \dots & b_1 & b_{0}\\
0 & 0 & \dots & 0 & 0 \\
0 & 0 & \ddots & \vdots & 0 \\
\vdots & \ddots & \ddots & 0 & \vdots\\
0 & \dots & 0 & 0 & 0
\end{array}\right]
\left[\begin{array}{c}
f(\widetilde{x}_{n+s-1})\\
f(\widetilde{x}_{n+s-2})\\
\vdots\\
f(\widetilde{x}_{n+1})\\
f(\widetilde{x}_{n})
\end{array}\right]
\end{equation}

or, in matrix notation,
\begin{equation}
\widetilde{\mathbf{x}}_{n} = \mathbf{A}\widetilde{\mathbf{x}}_{n-1} + h\mathbf{B}\mathbf{f}(\widetilde{\mathbf{x}}_{n-1}).
\end{equation}

Lipschitz continuity of $f$ implies that $f$ grows at most linearly:
\begin{equation}
\lvert f(\widetilde{x})\rvert \leqslant M + L \lvert \widetilde{x}\rvert
\end{equation}

where $M, L > 0$. Taking norms in the difference equation above therefore gives
\begin{equation}
\lVert\widetilde{\mathbf{x}}_{n}\rVert \leqslant \lVert\mathbf{A}\rVert\lVert\widetilde{\mathbf{x}}_{n-1}\rVert + h\lVert\mathbf{B}\rVert\left(M\lVert\mathbf{1}\rVert + L\lVert\widetilde{\mathbf{x}}_{n-1}\rVert\right).
\end{equation}

As a consequence of Proposition 4 in this answer, for a zero-stable method there exists a norm in which
\begin{equation}
\lVert\mathbf{A}\rVert = \rho(\mathbf{A}) \leqslant 1.
\end{equation}

Inserting this inequality and substituting the constants $C = \lVert\mathbf{B}\rVert L$ and $D = \lVert\mathbf{B}\rVert M\lVert\mathbf{1}\rVert$ yields
\begin{equation}
\lVert\widetilde{\mathbf{x}}_{n}\rVert \leqslant (1 + hC)\lVert\widetilde{\mathbf{x}}_{n-1}\rVert + hD.
\end{equation}

Using $1 + hC \leqslant e^{hC}$ (the Taylor series of the exponential) and induction:
\begin{equation}
\lVert\widetilde{\mathbf{x}}_{n}\rVert \leqslant e^{Ch}\lVert\widetilde{\mathbf{x}}_{n-1}\rVert + hD \leqslant e^{nCh}\lVert\widetilde{\mathbf{x}}_0\rVert + hD \sum_{k=0}^{n-1} e^{kCh} \leqslant e^{nCh}\lVert\widetilde{\mathbf{x}}_0\rVert + nhD\,e^{nCh}.
\end{equation}

Since the number of samples on the closed interval is $n = \frac{t_{end}-t_0}{h}$ (round-off is ignored for simplicity):
\begin{equation}
\lVert\widetilde{\mathbf{x}}_{n}\rVert \leqslant e^{C(t_{end}-t_0)}\lVert\widetilde{\mathbf{x}}_0\rVert + D(t_{end}-t_0)e^{C(t_{end}-t_0)}.
\end{equation}

The right-hand side is a constant independent of $n$ and $h$, so it bounds $\lVert\widetilde{\mathbf{x}}_{n}\rVert$ on the whole interval and the statement is proved.
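As a sanity check, here is a small numerical sketch (entirely my own construction, using the 2-step Adams-Bashforth method and $f = \sin$, which is bounded by $M = 1$ and Lipschitz with $L = 1$) confirming that the final bound holds, although it is far from tight:

```python
import numpy as np

# Companion form of 2-step Adams-Bashforth: a = [0, 1], b = [-1/2, 3/2].
# In the infinity norm ||A|| = 1 already, so no special norm is needed.
A = np.array([[1.0, 0.0],
              [1.0, 0.0]])
B = np.array([[1.5, -0.5],
              [0.0, 0.0]])
L, M = 1.0, 1.0                             # |sin| <= 1, sin is 1-Lipschitz
t0, t_end, h = 0.0, 5.0, 0.01

C = np.linalg.norm(B, np.inf) * L           # C = ||B|| L = 2
D = np.linalg.norm(B, np.inf) * M * 1.0     # D = ||B|| M ||1||, ||1||_inf = 1

x = np.array([2.0, 2.0])                    # [x_1, x_0], perfect initialization
x0_norm = np.linalg.norm(x, np.inf)
sup = x0_norm
for _ in range(int(round((t_end - t0) / h))):
    x = A @ x + h * (B @ np.sin(x))
    sup = max(sup, np.linalg.norm(x, np.inf))

T = t_end - t0
bound = np.exp(C * T) * x0_norm + D * T * np.exp(C * T)
print(sup, "<=", bound)                     # ~3.14 <= ~2.6e5: bounded, not tight
```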
