[Math] Numerical instability using only Heun’s method on a simple PDE.

ap.analysis-of-pdes, na.numerical-analysis

I'm trying to simulate the evolution of the Wigner function (a pseudo probability distribution over phase space) for a point particle moving in a chaotic potential. I'll provide background first, as well as some ignorant speculation at the end, but I've isolated my problem to a very simple case, so you should probably just skip to the "Reduced Problem".

Background

The PDE governing the Wigner function $W(x,p,t)$ can be approximated as

$\partial_t W = -\frac{p}{m} \partial_x W + V^\prime (x) \partial_p W - \frac{\hbar^2}{24} V^{\prime \prime \prime} (x) \partial_p^3 W$

or, in the dimensionless PDE notation,

$W_t = -p W_x + f(x) W_p - f^{\prime \prime}(x) W_{ppp} .$

However, the problem I'm having occurs for zero potential. In this case, the $p$-dependence of the Wigner function decouples from its evolution, so we can just fix $p$.

Reduced Problem

Consider merely the equation

$\partial_t W = - c \ \partial_x W$

for $W(x,t)$. For initial conditions $W(x,t=0) = g(x)$, it is solved by $W(x,t) = g(x-ct)$, which is just the original wave sliding at speed $c$.

Suppose my initial condition is a normal distribution: $g(x) \propto e^{-x^2}$. If I were to simulate this with just Euler's method,

$W (x_i,t_{j+1}) = W (x_i,t_j)+ \Delta W (x_i,t_j), $

$\Delta W (x_i,t_j) = -c \, \Delta t \, [W(x_{i+1},t_j)-W(x_{i-1},t_j)]/(2 \Delta x),$

I would quickly run into a serious instability: On the leading and trailing edges of the traveling Gaussian (where the function is concave up) Euler's method will under-estimate the time derivative, yielding overly steep leading and trailing edges. Here's an animated gif of what it looks like. Eventually, the trailing edge dips below zero and all hell breaks loose.
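For reference, the scheme above can be sketched in a few lines. The grid size, Courant number, and step count here are illustrative choices, not values from my actual runs:

```python
import numpy as np

# Forward Euler in time + centered difference in x (FTCS) for W_t = -c W_x,
# on a periodic grid with a Gaussian initial condition.
nx, dx, c, dt = 200, 0.1, 1.0, 0.05        # Courant number c*dt/dx = 0.5
x = (np.arange(nx) - nx // 2) * dx
W = np.exp(-x**2)                           # g(x) = exp(-x^2)

for _ in range(500):
    # centered difference: dW/dx ~ [W(x_{i+1}) - W(x_{i-1})] / (2 dx)
    W = W - c * dt * (np.roll(W, -1) - np.roll(W, 1)) / (2 * dx)

# By this point the profile has dipped below zero and the growing
# oscillations have swamped the solution.
print(W.min())
```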

Furthermore, this instability scales poorly: $t_{\mathrm{breakdown}} \propto 1/\Delta t$, so if I want to double the length of the simulation it requires 4 times as many total time steps.

So, just use Heun's method or the 4th-order Runge-Kutta method, right? Well, here's the thing. Switching to Heun's method does significantly improve the simulation; $t_{\mathrm{breakdown}}$ increases by about a factor of 10. But once the method is fixed (Heun, RK4, whatever), the scaling problem remains. This puts a pretty hard ceiling on the length of my simulation.

Now, when I use Heun's method, the breakdown is slightly different. Basically, the wave dips below zero on the far trailing tail of the Gaussian, where the function is very small (like $10^{-40}$). But as soon as it does dip below zero, the error grows exponentially and swamps the simulation. Here is an animated gif zoomed in on the trailing edge on a logarithmic scale$^\dagger$.
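Heun's method on the same reduced problem is a small change to the loop (again a sketch with illustrative parameters, not my actual code):

```python
import numpy as np

def dWdt(W, c=1.0, dx=0.1):
    """Centered-difference right-hand side of W_t = -c W_x (periodic grid)."""
    return -c * (np.roll(W, -1) - np.roll(W, 1)) / (2 * dx)

nx, dx, dt = 200, 0.1, 0.05
x = (np.arange(nx) - nx // 2) * dx
W = np.exp(-x**2)

for _ in range(100):                 # t = 5: the pulse has moved 5 units
    k1 = dWdt(W)                     # slope at the start of the step
    k2 = dWdt(W + dt * k1)           # slope at the predicted endpoint
    W = W + dt * (k1 + k2) / 2       # average the two slopes

# The bulk of the pulse tracks the exact solution g(x - c t) closely at
# moderate times; the trouble only builds up much later, out in the tails.
print(np.abs(W - np.exp(-(x - 5.0)**2)).max())
```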

Question

Is there a way to fix this instability, or am I doomed to the same time scaling? Do I need to just keep increasing the order of my Runge-Kutta method (RK5, RK6, etc.) in order to simulate for longer times?

(By the way, RK4 does not seem to improve much over Heun's method and takes significantly longer to run.)

Ignorant Speculation

(Don't read…)

I think this has something to do with the nature of Gaussians. Euler's method breaks down because it calculates using the derivative at the beginning of the interval (which basically assumes the derivative is constant along the interval), so it performs badly when the 2nd derivative is appreciable. Heun's method is better because it averages the derivatives at the beginning and end of the interval; it is vulnerable to strong third derivatives. I figure 4th-order Runge-Kutta is vulnerable to strong fourth derivatives, although it's probably more complicated than that.

The problem with the Gaussian is that all derivatives (when normalized to the magnitude of the function) become large away from the center, so the evolution of the tails always breaks:

$g = e^{-x^2}$

$g^\prime/g = -2x$

$g^{\prime \prime}/g = 4x^2-2$

$g^{\prime \prime \prime}/g = -8x^3+12x$
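These ratios are easy to verify symbolically; a quick check using sympy:

```python
import sympy as sp

x = sp.symbols('x')
g = sp.exp(-x**2)

# Normalized derivatives g^(n)/g of the Gaussian: polynomials in x that
# grow without bound away from the center (Hermite polynomials, up to sign).
ratios = [sp.expand(sp.diff(g, x, n) / g) for n in (1, 2, 3)]
print(ratios)
```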

But then again, it's not like Gaussians are unusual functions; if this were a problem, people would know about it and have a solution.

Also, the extreme tails of the Gaussian really shouldn't break the simulations. You'd think that if there were problems with derivatives at such tiny values, then numerical errors introduced by machine precision would be worse.


$^\dagger$Actually, it's a crudely modified logarithmic scale which cuts out the range $[-10^{-50},10^{-50}]$ in order to display negative values. I am about 50% sure that the apparent discontinuity of the "jump" as it falls below zero is due to the crudeness of the scale. Here is the absolute value on a normal log scale. There does seem to be a weird "bounce" when it goes below zero.


Edit

Seeing as I am getting errors developing on the far tails of the Gaussian, I don't think adaptive step sizing will help. It doesn't seem like a special case/situation is causing the problem.

Also, I can't really check easily for trouble spots because I need to prevent this error even when I'm not in such a simple situation. Plus, the Wigner function is supposed to go negative occasionally, just not when it starts out positive and the equations of motion are so simple.

Edit2

I don't think I can use an implicit method because, for non-trivial $p$-dependence, solving the implicit equation

$W(x_i,p_j,t_{k+1}) - W(x_i,p_j,t_k) -\Delta t \ f[W(x_i,p_j,t_{k+1})]=0$

for $W(x_i,p_j,t_{k+1})$ isn't computationally feasible, because $W(x_i,p_j,t=t_0)$ is a huge two-dimensional matrix. The Scholarpedia article by Hamdi et al. (suggested by Jitse Niesen below) supports this.
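That said, for the 1D reduced problem the implicit equations are linear in the unknowns, so backward Euler needs only one linear solve per step. A sketch with illustrative parameters (whether this extends affordably to the full 2D problem is exactly my worry above):

```python
import numpy as np

# Backward Euler + centered differences for W_t = -c W_x on a periodic grid:
# solve (I + c*dt*D) W^{k+1} = W^k, with D the centered-difference matrix.
nx, dx, c, dt = 200, 0.1, 1.0, 0.05
x = (np.arange(nx) - nx // 2) * dx
W = np.exp(-x**2)

D = (np.eye(nx, k=1) - np.eye(nx, k=-1)) / (2 * dx)
D[0, -1], D[-1, 0] = -1 / (2 * dx), 1 / (2 * dx)   # periodic wrap-around
A = np.eye(nx) + c * dt * D

for _ in range(500):
    W = np.linalg.solve(A, W)   # one linear solve per time step

# Each Fourier mode is multiplied by 1/|1 + i*nu*sin(k*dx)| <= 1 per step,
# so nothing blows up (at the price of some artificial damping).
print(np.abs(W).max())
```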

Best Answer

What's the CFL condition of the method for your problem? This is the main thing missing from the formulation of the problem, and it is often at the heart of instabilities of the type you discuss. As the Wikipedia article says, implicit methods are usually better for maintaining a reasonable CFL condition.

The CFL condition will tell you the relationship between $\Delta t$ and $\Delta x$ that causes spurious oscillations to die down. By figuring this out in advance you can find a method that gives you a relationship that is better for your problem.

The standard way to find the CFL condition is to look for a wave solution $e^{\omega t + ikx}$, plug it into the numerical method, and see what the numerical dispersion relation (between $\omega$ and $k$) is. The point is that for a $k$ that corresponds to an oscillation every $2\Delta x$ ($k=\pi/\Delta x$) you must have decay (meaning $\mathrm{Re}(\omega)<0$; the imaginary part is not important).
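To make this recipe concrete for the schemes in the question (assuming the centered-difference spatial operator, with $\nu = c\Delta t/\Delta x$ the Courant number; a sketch, not the OP's code):

```python
import numpy as np

# Von Neumann analysis: substitute W_i ~ e^{i k x_i} into the centered
# difference; each Fourier mode is multiplied per step by a factor G(k).
nu = 0.5                               # Courant number c*dt/dx (illustrative)
theta = np.linspace(0.0, np.pi, 1001)  # k*dx over the resolvable wavenumbers
z = -1j * nu * np.sin(theta)           # dt * (discrete eigenvalue of -c d/dx)

G_euler = 1 + z                        # forward Euler stability polynomial
G_heun = 1 + z + z**2 / 2              # Heun (RK2) stability polynomial

# |G_euler|^2 = 1 + nu^2 sin^2(k dx) and |G_heun|^2 = 1 + nu^4 sin^4(k dx)/4:
# both exceed 1 for every nu > 0, so both schemes are unconditionally
# unstable with centered differences -- Heun just grows far more slowly.
print(np.abs(G_euler).max(), np.abs(G_heun).max())
```

So shrinking $\Delta t$ delays the breakdown but never removes it, consistent with the scaling observed in the question.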

The condition that the real part is negative is the CFL condition. Now, for your problem, wave solutions are probably not going to be exact solutions, because of the nonconstant $f(x)$. So I would replace both $f$ and $f^{\prime\prime}$ by constants just to see how their sizes enter the CFL condition.