[Math] Accurate floating-point linear interpolation

I want to perform a simple linear interpolation between $A$ and $B$ (which are binary floating-point values) using floating-point math with IEEE-754 round-to-nearest-or-even rounding rules, as accurately as possible. Please note that speed is not a big concern here.

I know of two basic approaches. I'll use the symbols $\oplus, \ominus, \otimes, \oslash$ following Knuth [1] to mean floating-point addition, subtraction, multiplication and division, respectively (I don't actually use division, but I've listed it for completeness).

(1) $\quad f(t) = A\,\oplus\,(B\ominus A)\otimes t$

(2) $\quad f(t) = A\otimes(1\ominus t)\,\oplus \,B\otimes t$
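
For concreteness, here is a minimal C transcription of the two methods (the names lerp1 and lerp2 are mine); with IEEE-754 doubles and the default round-to-nearest-even mode, each C arithmetic operator performs the corresponding circled operation:

```c
#include <stdio.h>

/* Direct transcriptions of methods (1) and (2); the names are mine. */
static double lerp1(double a, double b, double t) {
    return a + (b - a) * t;          /* method (1): A (+) (B (-) A) (x) t       */
}

static double lerp2(double a, double b, double t) {
    return a * (1.0 - t) + b * t;    /* method (2): A (x) (1 (-) t) (+) B (x) t */
}

int main(void) {
    printf("%a %a\n", lerp1(-1.0, 3.0, 0.25), lerp2(-1.0, 3.0, 0.25));
    return 0;
}
```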

Each method has its pros and cons. Method (1) is clearly monotonic, which is a very desirable property; it is not at all obvious to me that the same holds for method (2), and I suspect it does not. On the other hand, method (2) has the advantage that when $t = 1$ the result is exactly $B$, not an approximation, which is also desirable (and it yields exactly $A$ when $t = 0$, but method (1) does that too). This follows from the properties listed in [2], in particular:

$u\oplus v = v\oplus u$

$u\ominus v = u\oplus -v$

$u\oplus v = 0$ if and only if $v = -u$

$u\oplus 0 = u$

$u\otimes 1 = u$

$u\otimes v = 0$ if and only if $u = 0$ or $v = 0$

In [3] Knuth also discusses this case:

$u' = (u\oplus v)\ominus v$

which implicitly means that $u'$ may or may not be equal to $u$. Replacing $u$ with $B$ and $v$ with $-A$ and using the rules above, it follows that $A\oplus(B\ominus A) = B$ is not guaranteed, meaning that method (1) does not always produce $B$ when $t = 1$.
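
A quick illustration of that round-trip inequality in double precision:

```c
#include <stdio.h>

int main(void) {
    double u = 0.1, v = 1.0;             /* 0.1 is not exactly representable */
    double u2 = (u + v) - v;             /* Knuth's u' = (u (+) v) (-) v     */
    printf("u  = %a\nu' = %a\n", u, u2); /* the two printed values differ    */
    return 0;
}
```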

So, here come my questions:

  1. Is method (2) guaranteed to be monotonic?
  2. If not, is there a better method that is accurate, monotonic and yields $A$ when $t = 0$ and $B$ when $t = 1$?
  3. If not (or you don't know): when $t = 1$, does method (1) always overshoot (that is, $A\oplus(B\ominus A)=A+(B-A)\cdot t$ for some $t \geq 1$)? Always undershoot (likewise, for some $t \leq 1$)? Or does it sometimes overshoot and sometimes undershoot?

I assume that if method (1) always undershoots, I can make a special case when $t = 1$ to obtain the desired property of being exactly equal to $B$ when $t = 1$, but if it always overshoots, then I can't. That's the reason for question 3.

EDIT: I've found that the answer to question 3 is that it sometimes overshoots and sometimes undershoots. For example, in double precision:

-0x1.cae164da859c9p-1 + (0x1.eb4bf7b6b2d6ep-1 - (-0x1.cae164da859c9p-1))

results in 0x1.eb4bf7b6b2d6fp-1, which is 1 ulp greater than $B$, while

-0x1.be03888ad585cp-1 + (0x1.0d9940702d541p-1 - (-0x1.be03888ad585cp-1))

results in 0x1.0d9940702d540p-1, which is 1 ulp less than $B$. On the other hand, the method that I planned (special-casing $t=1$) won't fly, because I've found cases where $t < 1$ and yet $A\oplus(B\ominus A)\otimes t > B$, for example:

t = 0x1.fffffffffffffp-1
A = 0x1.afb669777cbfdp+2
B = 0x1.bd7b786d2fd28p+1

$A \oplus (B \ominus A)\otimes t =\,$ 0x1.bd7b786d2fd29p+1

which means that if method (1) is to be used, the only strategy that may work is clamping.
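
For anyone who wants to reproduce this, a small C program (hex-float literals require C99):

```c
#include <stdio.h>

int main(void) {
    double t = 0x1.fffffffffffffp-1;    /* the largest double below 1 */
    double A = 0x1.afb669777cbfdp+2;
    double B = 0x1.bd7b786d2fd28p+1;
    double r = A + (B - A) * t;         /* method (1) */
    printf("r = %a\nB = %a\n", r, B);   /* r is 1 ulp above B even though t < 1 */
    return 0;
}
```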

Update: As noted by Davis Herring in a comment and later checked by me, special-casing $t=1$ actually works.
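
Here is a minimal C sketch of method (1) with that special case (the name lerp1_exact is mine):

```c
#include <stdio.h>

/* Method (1) with t == 1 special-cased so the right endpoint is exact. */
static double lerp1_exact(double a, double b, double t) {
    if (t == 1.0)
        return b;                /* exactly B at t = 1 */
    return a + (b - a) * t;      /* plain method (1) otherwise */
}

int main(void) {
    double a = -0x1.cae164da859c9p-1, b = 0x1.eb4bf7b6b2d6ep-1;
    printf("%a\n", lerp1_exact(a, b, 1.0));   /* prints b exactly */
    return 0;
}
```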


References

[1] D. E. Knuth, The Art of Computer Programming, Vol. 2: Seminumerical Algorithms, 3rd ed., p. 215.

[2] Op. cit., pp. 230-231.

[3] Op. cit., p. 235, Eq. (41).

Best Answer

1) Method (2) is not always monotonic. A counterexample in double precision:

$$A = B = 4000\quad t=\tt{0x1.b2f3db7800a39p-2}$$

With these values:

$$1\ominus t=\tt{0x1.26861243ffae4p-1}$$

$$4000\otimes \tt{0x1.26861243ffae4p-1}\oplus 4000\otimes \tt{0x1.b2f3db7800a39p-2}=\tt{0x1.f400000000001p+11}\approx 4000.0000000000005$$

(Obviously, when $t = 0$ and $t = 1$ the result is exactly 4000, so the function ascends and then descends; therefore it is not monotonic.)
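
This is easy to verify; a small C program, assuming IEEE-754 doubles with round-to-nearest-even:

```c
#include <stdio.h>

int main(void) {
    double t = 0x1.b2f3db7800a39p-2;
    double r = 4000.0 * (1.0 - t) + 4000.0 * t;   /* method (2), A = B = 4000 */
    printf("1-t = %a\nr   = %a\n", 1.0 - t, r);
    /* expected: 0x1.26861243ffae4p-1 and 0x1.f400000000001p+11 (> 4000) */
    return 0;
}
```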

2) The second question asks for a better method. Here is a candidate:

$$\mathrm{lerp}_{A,B}(t)=\begin{cases}A\oplus(B\ominus A)\otimes t&& \text{if}\ t<0.5 \\ B\ominus(B\ominus A)\otimes(1\ominus t)&&\text{otherwise}\end{cases}$$

This method has the following properties:

  • It matches the endpoints (obvious).
  • Both halves are monotonic (obvious).
  • It is exact when $A=B$ (obvious).
  • It seems to be monotonic across the change of intervals.

The last property is the only one not proven. It turns out that there are many values of $A$ and $B$ with $A<B$ for which $A\oplus(B\ominus A)\otimes 0.5>B\ominus(B\ominus A)\otimes 0.5$; however, after testing many billions of single- and double-precision values, I could not find a single case violating either of these implications:

Let $s=0.5-\mathrm{ulp}(0.5)/2\quad$(the number immediately preceding 0.5), then

$A<B\implies A\oplus(B\ominus A)\otimes s\le B\ominus(B\ominus A)\otimes 0.5$

$A>B\implies A\oplus(B\ominus A)\otimes s\ge B\ominus(B\ominus A)\otimes 0.5$
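
A minimal C sketch of this candidate method (the name lerp_candidate is mine):

```c
#include <stdio.h>

/* The candidate method: interpolate from the nearer endpoint so that
   both endpoints are exact and each half is monotonic. */
static double lerp_candidate(double a, double b, double t) {
    if (t < 0.5)
        return a + (b - a) * t;            /* A (+) (B (-) A) (x) t         */
    else
        return b - (b - a) * (1.0 - t);    /* B (-) (B (-) A) (x) (1 (-) t) */
}

int main(void) {
    printf("%a %a\n", lerp_candidate(1.0, 3.0, 0.0),
                      lerp_candidate(1.0, 3.0, 1.0));   /* exactly 1 and 3 */
    return 0;
}
```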

Update: The reason it works seems to be related to this fact: barring underflow, for any positive binary floating-point number $u$, $u\otimes s < u/2$ (and similarly for negative $u$, with the inequality reversed).

Put another way, under the same assumption, for base 2 and precision $p$: $u\otimes\frac{2^p-1}{2^p} < u$. It can be shown that the bit that determines rounding (the bit just below the least significant bit of the significand) is always zero when computing that product, so the product is always rounded toward zero. The multiplication therefore has the effect of subtracting 1 ulp from the number (or 1/2 ulp if the significand is exactly 1.0).
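
This fact is easy to observe; for instance (a sketch assuming IEEE-754 doubles, with $s$ obtained via nextafter):

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    double s = nextafter(0.5, 0.0);     /* the double immediately below 0.5 */
    double u = 3.0;                     /* an arbitrary positive double     */
    printf("u*s = %a\nu/2 = %a\n", u * s, u / 2.0);
    /* u*s prints 0x1.7ffffffffffffp+0, one ulp below u/2 = 0x1.8p+0 */
    return 0;
}
```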


I've deleted my two other proposals after discovering that both violated monotonicity.
