[Math] Numerical solution to a system of second order differential equations

numerical methods, ordinary differential equations, physics

I'm writing a sort of physical simulator. I have $n$ bodies that move in a two-dimensional space under the force of gravity (for instance, it could be a simplified version of the solar system). Let's call $m_1, \dots, m_n$ their masses, $(x_1, y_1), \dots, (x_n, y_n)$ their positions, $(vx_1, vy_1), \dots, (vx_n, vy_n)$ their velocities and $(ax_1, ay_1), \dots, (ax_n, ay_n)$ their accelerations.
Suppose we are given some initial conditions on the positions and velocities, and let $dt$ be a small amount of time. My goal is to compute the positions of all bodies at time $t_0+dt$, $t_0+2dt$, $\dots$ and so on, where $t_0$ is the initial time.

Here is what I have done:

  1. First, for each body I compute the total force that acts on it at the current time, due to all the other bodies, and from it the instantaneous acceleration of that body;
  2. Then I compute the new velocity for each body, by applying the formulas $vx_i \leftarrow vx_i + dt\cdot ax_i$ and $vy_i \leftarrow vy_i + dt\cdot ay_i$;
  3. Finally I compute the new position for each body, by applying the analogous formulas $x_i \leftarrow x_i + dt\cdot vx_i$ and $y_i \leftarrow y_i + dt\cdot vy_i$.
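In code, one step of this scheme might look roughly like the following sketch (`G`, `gravity_accelerations` and `step` are just names I use here for illustration):

```python
import numpy as np

G = 6.674e-11  # gravitational constant

def gravity_accelerations(x, y, m):
    """Step 1: acceleration of each body due to all the other bodies."""
    ax, ay = np.zeros_like(x), np.zeros_like(y)
    for i in range(len(m)):
        for j in range(len(m)):
            if i != j:
                dx, dy = x[j] - x[i], y[j] - y[i]
                r3 = (dx * dx + dy * dy) ** 1.5
                ax[i] += G * m[j] * dx / r3
                ay[i] += G * m[j] * dy / r3
    return ax, ay

def step(x, y, vx, vy, m, dt):
    """One time step: steps 1-3 in the order described above."""
    ax, ay = gravity_accelerations(x, y, m)   # step 1
    vx, vy = vx + dt * ax, vy + dt * ay       # step 2: velocities first
    x, y = x + dt * vx, y + dt * vy           # step 3: positions with the *new* velocities
    return x, y, vx, vy
```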

My solution in some sense reminds me of Euler's method for solving differential equations. It is quite intuitive and simple, yet rather accurate. What I am wondering is whether there is a cleverer, better method for solving the problem. By better I mean that, given a fixed $dt$, it gets closer to the exact solution while requiring the same amount of computation or less.

For instance, I noticed by experimentation that if we interchange steps 2 and 3, we obtain a method which requires the same computations but is far less accurate (for it to be as accurate as the original method, we have to use a smaller step $dt$).
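For completeness, the interchanged variant (same hypothetical helper as above) differs only in the order of the last two updates:

```python
def step_swapped(x, y, vx, vy, m, dt):
    """Steps 2 and 3 interchanged: positions are advanced with the *old* velocities."""
    ax, ay = gravity_accelerations(x, y, m)
    x, y = x + dt * vx, y + dt * vy           # positions first, using the old velocities
    vx, vy = vx + dt * ax, vy + dt * ay       # then velocities
    return x, y, vx, vy
```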

Best Answer

The second (worse) version of your algorithm resembles the explicit Euler scheme. It is consistent to first order only, and it does not preserve invariants of motion such as the total energy. In fact, the errors of the Euler scheme act as excitations, so the orbits of your planets would spiral outward over time.

Runge-Kutta methods are designed to be consistent with the system of differential equations to higher orders such as $4$. Yet they are not designed to preserve the underlying Hamiltonian structure, in particular they will cause the orbits of your planets to experience energy drift and therefore slowly spiral inward or outward over time. For simulations over longer time scales, you may want to avoid that.

In fact, your first integrator version does avoid energy drift. Its explicit step of updating the velocity from (the field at) the previous position is balanced by the formally implicit step of using the already-updated velocity to update the position. This is essentially an implementation of the leapfrog method, apart from the fact that the velocities thus calculated are interpreted more accurately as half-time samples, i.e. associated with time $\frac{t_i+t_{i+1}}{2}$ rather than with time $t_i$.
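To spell that out for a single coordinate (with $a(q)$ the acceleration evaluated at position $q$), one step of your scheme is $$v_{n+1} = v_n + a(q_n)\,dt, \qquad q_{n+1} = q_n + v_{n+1}\,dt,$$ and relabelling the updated velocities as half-step samples, $v_{n+1} \to v_{n+\frac12}$, turns this into the leapfrog form $$v_{n+\frac12} = v_{n-\frac12} + a(q_n)\,dt, \qquad q_{n+1} = q_n + v_{n+\frac12}\,dt.$$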

The Verlet integrator mentioned in wltrup's answer is essentially your leapfrog method with the velocities eliminated. To do that, the Verlet method needs to keep track of the previous two positions instead of just one position. (Verification is left as an exercise to the reader.)
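(For reference, eliminating the velocities from the leapfrog recurrences gives the position-only update $$q_{n+1} = 2q_n - q_{n-1} + a(q_n)\,\delta t^2,$$ which is why two previous positions are needed.)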

The leapfrog and Verlet methods are symplectic integrators. These are designed to adhere to the Hamiltonian principles underlying the system dynamics, at least for problems with separable Hamiltonians, i.e. those that can be written as $$H(q,p) = T(p) + V(q)$$ where $q$ comprises all position coordinates, $p$ comprises the conjugate momenta, $T$ represents kinetic energy and $V$ potential energy.

Your problem does feature a separable Hamiltonian, so you may want to stick with symplectic integrators. That choice rules out the use of ordinary Runge-Kutta methods.
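Concretely, for the gravitational $n$-body problem, $$H(q,p) = \sum_{i=1}^{n}\frac{\lVert p_i\rVert^2}{2 m_i} \;-\; \sum_{i<j}\frac{G\,m_i m_j}{\lVert q_i - q_j\rVert},$$ which is indeed of the separable form $T(p) + V(q)$ above.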

You may, however, want to increase the order of consistency of your symplectic integrator. Then slight reductions of the time step would result in even larger gains in accuracy; or you could afford larger time steps for the same accuracy.

The Wikipedia entry on symplectic integrators lists higher-order symplectic integration methods found by Ruth and others (1, 2, 3). Those come with special (given, constant) coefficients which I will label $a_1,\ldots,a_n$ and $b_1,\ldots,b_n$. With these, one step for evolving the system from time $t$ to time $t+\delta t$ takes the form

  • for $i=1,\ldots,n$:

    1. Compute vector $F$ of all forces, given position vector $q$
    2. Change vector $p$ of all momenta by $b_i\,F\,\delta t$
    3. Compute vector $v$ of all velocities, given momentum vector $p$
    4. Change vector $q$ of all positions by $a_i\,v\,\delta t$

Your problem has point masses, so we can reasonably normalize each force and momentum by the mass of the point it is applied to. Thus forces turn into accelerations, momenta turn into velocities, and step 3 becomes a no-op. An integration step now looks like the following:

  • for $i=1,\ldots,n$:

    1. Compute vector $g$ of all accelerations, given position vector $q$
    2. Change vector $v$ of all velocities by $b_i\,g\,\delta t$
    3. Change vector $q$ of all positions by $a_i\,v\,\delta t$
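In code, one such integration step might look like the following sketch, where `accel(q)` is a stand-in for the acceleration computation of your problem and `a_coeffs`, `b_coeffs` are the coefficient lists:

```python
def symplectic_step(q, v, dt, a_coeffs, b_coeffs, accel):
    """One full integration step q, v -> q, v with coefficients a_i, b_i.

    accel(q) is assumed to return the vector g of accelerations for the
    position vector q (both, e.g., NumPy arrays of shape (n, 2)).
    """
    for a_i, b_i in zip(a_coeffs, b_coeffs):
        g = accel(q)           # 1. accelerations from the current positions
        v = v + b_i * g * dt   # 2. kick: update the velocities
        q = q + a_i * v * dt   # 3. drift: update the positions
    return q, v
```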

The wisdom now lies in the coefficients. Your first-mentioned integrator has $n=1, a_1=1, b_1=1$. For an order-$4$ symplectic scheme, Ruth has found $n=4$ and $$\begin{bmatrix} a_1 & a_2 & a_3 & a_4 \\ b_1 & b_2 & b_3 & b_4 \end{bmatrix} = \frac{1}{2-\sqrt[3]{2}} \begin{bmatrix} \frac{1}{2} & \frac{1-\sqrt[3]{2}}{2} & \frac{1-\sqrt[3]{2}}{2} & \frac{1}{2} \\ 0 & 1 & -\sqrt[3]{2} & 1 \end{bmatrix}$$ If you compare that with the referenced Wikipedia article and wonder about the reversed numbering, remember that sequencing of operator application works from right to left. My numbering agrees with reference (3).
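With the sketch above, the two sets of coefficients could be supplied like this (numbering as in the matrix):

```python
# Your original scheme: a single (a, b) pair
a1, b1 = [1.0], [1.0]

# Ruth's order-4 coefficients
c = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
a4 = [c / 2, c * (1 - 2 ** (1 / 3)) / 2, c * (1 - 2 ** (1 / 3)) / 2, c / 2]
b4 = [0.0, c, -c * 2 ** (1 / 3), c]

# q, v = symplectic_step(q, v, dt, a4, b4, accel)   # one fourth-order step
```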

Note that $b_1=0$, which has the consequence that the first update of $p$ resp. $v$ is a no-op. Therefore, each full integration step effectively begins and ends with an update to $q$, and the values of $q$ outside the loop are not used for updating $p$ resp. $v$. If you are not interested in those particular values of $q$, you might as well lump $a_1$ and $a_4$ together and condense the full integration step to only three partial steps. The sequence of intermediate points would be the same, but the $q$ values outside the loop would be slightly time-shifted and give a less accurate approximation to the true position at time $t$, just as the leapfrog method has its velocities slightly time-shifted. With very coarse time steps, the noticeable effect would be that orbits would become slanted in phase space.

Update: I like to identify the leapfrog method with $(a_1, b_1) = (1, 1)$. As pointed out, that scheme produces interleaved time samples of positions $q$ and velocities $v$. To produce time-synchronized $(q, v)$ samples, one of the intermediate steps should be split into two half-steps, and the loop body rotated so that one of the two half-steps occurs at the beginning and the other at the end. This is equivalent to the scheme $(a_1, a_2; b_1, b_2) = (\frac{1}{2},\frac{1}{2};0,1)$ or $(a_1, a_2; b_1, b_2) = (1,0;\frac{1}{2},\frac{1}{2})$, which Candy/Rozmus call leapfrog and pseudo-leapfrog, respectively. Though I'd rather emphasize the underlying equivalences, it is probably in order to follow LutzL's suggestion and name the $(a_1, b_1) = (1, 1)$ method symplectic Euler instead.
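In the coefficient notation above, the Candy/Rozmus leapfrog and pseudo-leapfrog variants read

```python
# Candy/Rozmus "leapfrog": drift-kick-drift (the first kick is a no-op, b_1 = 0)
a_lf, b_lf = [0.5, 0.5], [0.0, 1.0]

# Candy/Rozmus "pseudo-leapfrog": kick-drift-kick (the last drift is a no-op, a_2 = 0)
a_plf, b_plf = [1.0, 0.0], [0.5, 0.5]
```

Both yield time-synchronized $(q, v)$ samples at the step boundaries when plugged into the step sketched earlier.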
