How to Numerically Compute x ln x and Related Functions Near 0 – Numerical Analysis

Tags: approximation-theory, na.numerical-analysis

I was recently trying to find a numerical solution to a thermodynamics problem, and the expression $x\ln x$ appeared in one of the computations. I did not need its value very near $0$, so the computer managed fine, but it got me thinking – can one make a numerically stable algorithm to compute $x\ln x$ for values near $0$?

It is easy to prove that $\lim\limits_{x\to 0^+} x\ln x=0$. However, simply multiplying $x$ by $\ln x$ does not seem like a good approach for small $x$: the problem appears to be that we are multiplying a very small number ($x$) by a very large one ($\ln x$).

My first thought was to approximate it somehow. But I quickly saw that a Taylor series around $0$ won't work, as the derivative is $\ln x + 1$, which blows up (or rather down :-)) to $-\infty$. An iterative method like Newton's does not seem to be the solution either, because the operations it needs look even messier than what we are trying to compute.

So my question is – is there some numerically stable method to compute $x\ln x$ for small values of $x$? Preferably a general one, so that it could also be applied to functions like $x^n \ln x$ – though those at least have a finite first derivative at $0$ for $n>1$.

Best Answer

Modern mathematics libraries can compute $\log x$ accurately for all floating-point numbers; the algorithms for doing so have long been known and widely adopted. My experiments on a fairly recent Intel chip, with the GNU mathematics library and the GCC 10 compiler, confirm that.

Multiplication is even more clear-cut: correct rounding of the last bit is guaranteed by the IEEE 754 standard (though the standard offers several rounding modes, so "correct rounding" is relative to the mode in effect – round-to-nearest-even by default).
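These two claims are easy to check. Here is a quick Python sketch (not the original experiment, which used C) that uses the `decimal` module as a 50-digit reference; the helper `xlogx_reference` is my own name for it:

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 50  # 50 significant digits for the reference value

def xlogx_reference(x: float) -> float:
    """x*ln(x) computed in 50-digit decimal, rounded back to a double."""
    d = Decimal(x)  # converting a float to Decimal is exact
    return float(d * d.ln())

# For x anywhere in the normal range, the naive product x*log(x) is
# accurate to within a couple of ulps:
for x in (1e-10, 1e-100, 1e-300):
    naive = x * math.log(x)
    ref = xlogx_reference(x)
    rel_err = abs(naive - ref) / abs(ref)
    print(f"x = {x:.0e}: relative error {rel_err:.1e}")
```

On a typical machine each relative error comes out at a few times $2^{-52}$ or smaller – i.e. the "naive" product is essentially as good as double precision allows, as long as nothing underflows.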

It might appear from the above that precise computation of $x\log x$ is guaranteed for any small $x>0$. However, there is a reason why that fails for really tiny $x$, and there is no way to avoid it.

Floating-point numbers are normally stored with the mantissa normalized (leading bit 1, often implicit). When the number is too small, however, it cannot be normalized without the exponent underflowing. In that case the number is (usually) kept in unnormalized form and its number of bits of precision is reduced. This situation is called partial (or gradual) underflow, and such numbers are called subnormal or denormalized.
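The effect is easy to see from Python, where `math.ulp` (available since Python 3.9) reports the gap between a float and the next representable one; the specific values below assume IEEE 754 binary64 doubles, which is what virtually every platform uses:

```python
import math
import sys

min_normal = sys.float_info.min  # smallest positive normal double, ~2.2e-308

# In the normal range the gap between adjacent doubles scales with the
# number, so the relative spacing is a constant 2**-52:
print(math.ulp(1.0) == 2**-52)                      # True
print(math.ulp(min_normal) / min_normal == 2**-52)  # True

# In the subnormal range the absolute gap is frozen at 2**-1074, so
# relative precision degrades as numbers shrink:
x = 1e-310              # a subnormal number
print(math.ulp(x) / x)  # roughly 5e-14, i.e. only ~44 bits left
```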

So, if you compute $x\log x$ when the product lies in the partial-underflow range, $\log x$ itself will be computed accurately, but the multiplication by $x$ cannot deliver more bits of precision than numbers of the magnitude of $x\log x$ carry. Short of switching to a wider floating-point format, there is no way around this.
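For instance (a sketch assuming IEEE 754 doubles; the value `1e-312` is just a convenient example, not a special threshold):

```python
import math
import sys

# Pick x small enough that x*log(x) lands in the subnormal range,
# i.e. |x*log(x)| below the smallest normal double (~2.2e-308).
x = 1e-312
lx = math.log(x)  # about -718.4, a perfectly ordinary double
y = x * lx        # about -7.2e-310, below sys.float_info.min
print(y < 0 and abs(y) < sys.float_info.min)  # True: y is subnormal

# Near y, representable doubles are spaced 2**-1074 apart, so no
# algorithm whatsoever can return a result of this magnitude with
# better relative precision than math.ulp(y)/|y| (~7e-15) -- far
# coarser than the 2**-52 available in the normal range.
print(math.ulp(y) / abs(y))
```

Note that the limit here is the output format itself, not the formula used to compute it: any method producing a double of magnitude $\sim 10^{-310}$ hits the same wall.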

If $x\log x$ is in the partial-underflow range, then $x$ will be too – or it may even be so small that it underflows all the way to $0$. In practice $x$ comes from some earlier computation, and partial underflow means it may not be as precise as assumed, which is a further source of error. It is not the fault of the function $x\log x$.

Incidentally, I tested this explanation empirically, and the behaviour was exactly as predicted.
