Numerical Methods – Improving Numerical Stability of a Function

algebra-precalculus, functions, numerical-methods

I'm writing a program that needs to evaluate the function $$f(x) = \frac{1 - e^{-ux}}{u}$$ often with small values of $u$ (i.e. $u \ll x$). In the limit $u \to 0$ we have $f(x) = x$ by L'Hôpital's rule, so the function is well-behaved and non-singular in this limit, but evaluating it in the straightforward way using floating-point arithmetic causes problems when $u$ is small: the exponential is then quite close to 1, and when we subtract it from 1, the difference loses most of its precision to cancellation. Is there a way to rewrite this function so it can be computed accurately both in the small-$u$ limit and for larger values of $u$ (i.e. $u \approx x$)?

Of course, I could add a test to my program that switches to computing just $f(x) = x$, or perhaps the second-order approximation $f(x) = x - ux^2/2$, when $u$ is smaller than some threshold. But I'm curious to see if there's a way to do this without introducing a test.
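For concreteness, here is a minimal Python sketch of the straightforward evaluation and of the threshold-based workaround described above (the cutoff value 1e-8 is an arbitrary illustration, not something from the question):

```python
import math

def f_direct(x, u):
    # Straightforward evaluation: for small u the exponential is close to 1,
    # and the subtraction 1 - exp(-u*x) cancels away most significant digits.
    return (1.0 - math.exp(-u * x)) / u

def f_with_test(x, u, threshold=1e-8):
    # Workaround with an explicit test: below the threshold, fall back to
    # the second-order Taylor approximation f(x) ~ x - u*x**2/2.
    if abs(u) < threshold:
        return x - u * x * x / 2.0
    return (1.0 - math.exp(-u * x)) / u
```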

Best Answer

We once had this (or at least something very similar) in my first numerical analysis class:

There it was proposed to calculate $f$ in terms of $y = \exp(-ux)$ instead. That is, first compute $y$ and then evaluate $$f(x) = \frac{1-e^{-ux}}{u} = -x\frac{1-y}{\log(y)}$$
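In code, the reformulation might look like the following minimal Python sketch (the explicit check for $y = 1$, which catches the case where $ux$ is so small that the exponential rounds to exactly 1, is my own addition, not part of the original recipe):

```python
import math

def f_rewritten(x, u):
    # Compute y = exp(-u*x) once, then evaluate f as -x*(1 - y)/log(y).
    # The rounding errors in 1 - y and log(y) are strongly correlated,
    # so their ratio stays accurate even when y is very close to 1.
    y = math.exp(-u * x)
    if y == 1.0:
        # exp(-u*x) rounded to exactly 1; the limiting value of f is x.
        return x
    return -x * (1.0 - y) / math.log(y)
```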

I have never understood why this should have any effect on the calculation, however... (It can only have an effect if we make some assumptions about how the values are actually computed inside the machine.)

Edit: J.M. gave a very nice link in the comments, where it is explained why the above works!

If you run some numerical tests, let me know what you get (I'd be interested to see whether this actually does what they claimed it would)! ;)
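For such tests, a possible comparison in Python (a sketch of my own; using the standard-library function math.expm1, which evaluates $e^t - 1$ without cancellation for small $t$, as an accurate reference is an assumption not mentioned in the post):

```python
import math

def f_reference(x, u):
    # Accurate reference: f(x) = (1 - exp(-u*x))/u = -expm1(-u*x)/u.
    return -math.expm1(-u * x) / u

x = 1.0
for u in (1.0, 1e-4, 1e-8, 1e-12):
    direct = (1.0 - math.exp(-u * x)) / u
    y = math.exp(-u * x)
    rewritten = x if y == 1.0 else -x * (1.0 - y) / math.log(y)
    print(u, direct, rewritten, f_reference(x, u))
```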
