Solve a general 2nd-order ODE numerically with 2nd-order time differences

finite differences, numerical methods, ordinary differential equations

I want to solve a 2nd order ODE of the following general form

$\ddot{x} = f(\dot{x}, x)$

where a dot indicates a time derivative. Writing $v = \dot{x}$, a simple numerical scheme that is first-order accurate in time would be

$$
\begin{aligned}
x(t + \Delta t) &= x(t) + v(t)\,\Delta t \\
v(t + \Delta t) &= v(t) + f(v(t), x(t))\,\Delta t
\end{aligned}
$$
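For concreteness, a minimal sketch of this first-order scheme in Python (the RHS `f` below is a hypothetical placeholder I chose for illustration, a weakly damped oscillator; it is not part of the question):

```python
def f(v, x):
    # hypothetical placeholder RHS: weakly damped oscillator, x'' = -x - 0.1 x'
    return -x - 0.1 * v

def euler_step(x, v, dt):
    # one explicit Euler step, first-order accurate in time
    x_new = x + v * dt
    v_new = v + f(v, x) * dt
    return x_new, v_new

# integrate from t = 0 with x(0) = 1, v(0) = 0
x, v = 1.0, 0.0
dt = 0.01
for _ in range(1000):
    x, v = euler_step(x, v, dt)
```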

However, the accuracy of this scheme is unsatisfactory. I would like a scheme that is 2nd-order accurate in time. Naively, I apply 2nd-order central differences for the 1st and 2nd derivatives:

$\frac{x(t+\Delta t) - 2 x(t) + x(t-\Delta t)}{\Delta t^2} = f\biggl(\frac{x(t+\Delta t) - x(t-\Delta t)}{2\Delta t}, x(t)\biggr)$

In order to numerically solve this equation, I need to express the future as a function of the past, that is, solve the above equation for $x(t+\Delta t)$. However, for a general $f$ there might not be a way to solve the above equation explicitly.

Question: Is there a way to solve the above ODE numerically to 2nd-order precision if the only thing we are allowed to do with the RHS is evaluate it for given inputs?

Note: For the initial conditions, $x$ and $v$ are known at $t=0$. If the proposed method requires more than 2 steps of memory, please indicate how to correctly initialize it.

Best Answer

You could just iterate $$ x^{[m+1]}(t+Δt)=2x(t)-x(t−Δt)+Δt^2f\left(\frac{x^{[m]}(t+Δt)−x(t−Δt)}{2Δt},x(t)\right) $$ starting with the simple extrapolation $x^{[0]}(t+Δt)=2x(t)-x(t−Δt)$. Usually the first or second iteration already reaches sufficient accuracy.
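A minimal sketch of this iteration in Python (the placeholder `f`, the helper names, and the start-up step are my own additions for illustration; the second-order Taylor step $x(Δt) \approx x_0 + v_0\,Δt + \tfrac{1}{2}f(v_0,x_0)\,Δt^2$ supplies the second past value that the two-step scheme needs, answering the initialization note in the question):

```python
def f(v, x):
    # same hypothetical placeholder RHS as above
    return -x - 0.1 * v

def step(x_prev, x_curr, dt, n_iter=2):
    # initial guess by linear extrapolation: x^[0](t+dt) = 2 x(t) - x(t-dt)
    x_next = 2.0 * x_curr - x_prev
    for _ in range(n_iter):
        v_mid = (x_next - x_prev) / (2.0 * dt)  # central-difference velocity
        x_next = 2.0 * x_curr - x_prev + dt**2 * f(v_mid, x_curr)
    return x_next

# initialization from x(0), v(0): one second-order Taylor step
x0, v0 = 1.0, 0.0
dt = 0.01
x1 = x0 + v0 * dt + 0.5 * dt**2 * f(v0, x0)

xs = [x0, x1]
for _ in range(1000):
    xs.append(step(xs[-2], xs[-1], dt))
```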

For any twice differentiable function you get $x(t+Δt)−2x(t)+x(t−Δt)=O(Δt^2)$, so the error of the iteration is $O(Δt^2(LΔt/2)^k)$ for $k$ iterations, where $L$ is a Lipschitz constant of $f$ with respect to its first argument. As the expected truncation error of the method is $O(Δt^4)$, $k=2$ iterations should be sufficient; all further iterations just reduce and regularize the error coefficient.
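Spelling out the contraction step: if $x^*$ denotes the exact solution of the implicit equation, subtracting the fixed-point equation from the iteration gives $$ |x^{[m+1]}(t+Δt)-x^*| = Δt^2\left|f\Bigl(\tfrac{x^{[m]}(t+Δt)−x(t−Δt)}{2Δt},x(t)\Bigr) - f\Bigl(\tfrac{x^*−x(t−Δt)}{2Δt},x(t)\Bigr)\right| \le \frac{LΔt}{2}\,|x^{[m]}(t+Δt)-x^*|, $$ and the extrapolated initial guess is already $O(Δt^2)$ close, which yields the stated error bound.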

You get convergence to the solution closest to the initial estimate by a contraction argument such as the Banach fixed-point theorem, as long as the combined Lipschitz constant of the iteration satisfies $LΔt/2<1$.


For faster or more robust convergence, you would need to implement a Newton-like method.
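A sketch of such a Newton-type update, under the same restriction that only evaluations of $f$ are allowed, so the derivative of the residual is itself approximated by a finite difference (the function names, tolerances, and placeholder `f` are mine, not part of the answer):

```python
def f(v, x):
    # same hypothetical placeholder RHS as above
    return -x - 0.1 * v

def newton_step(x_prev, x_curr, dt, tol=1e-12, max_iter=10, eps=1e-7):
    # residual of the implicit central-difference equation in the unknown y = x(t+dt)
    def residual(y):
        return y - 2.0 * x_curr + x_prev - dt**2 * f((y - x_prev) / (2.0 * dt), x_curr)

    y = 2.0 * x_curr - x_prev  # same extrapolated initial guess as before
    for _ in range(max_iter):
        r = residual(y)
        if abs(r) < tol:
            break
        # derivative of the residual approximated by a forward difference,
        # since only evaluations of f are available
        drdy = (residual(y + eps) - r) / eps
        y -= r / drdy
    return y
```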
