Why subtracting 1 is considered ill-conditioned

Tags: condition number, floating point, numerical methods

I was reading the following to better understand stability:

Consider evaluating $f(x) = \sqrt{1+x}-1$ for $x$ near $0$. Since $C_f(x) = \frac{\sqrt{1+x}+1}{2\sqrt{1+x}}$, we have $C_f(0) = 1$, so the problem is not ill-conditioned. To compute $f$ we would take three steps: $(1)\ t_1 = 1+x$, $(2)\ t_2 = \sqrt{t_1}$, and $(3)\ f(x) = t_2-1$. Steps $(1)$ and $(2)$ are well-conditioned (condition numbers about $0$ and $1/2$ near $x=0$), while step $(3)$ is ill-conditioned. Thus we would say the algorithm is unstable.
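The step-by-step condition numbers in the quoted passage can be checked numerically. A minimal sketch, using the usual relative condition number $C_g(t) = |t\,g'(t)/g(t)|$ for each step (the helper names are mine, not from the text):

```python
# Relative condition number C_g(t) = |t * g'(t) / g(t)| of each step,
# evaluated near x = 0.  Helper names are illustrative only.
import math

def cond_step1(x):
    # t1 = 1 + x, derivative 1  ->  C = |x / (1 + x)|, tiny near x = 0
    return abs(x / (1 + x))

def cond_step2(t1):
    # t2 = sqrt(t1), derivative 1/(2 sqrt(t1))  ->  C = 1/2
    return abs(t1 / (2 * math.sqrt(t1)) / math.sqrt(t1))

def cond_step3(t2):
    # f = t2 - 1, derivative 1  ->  C = |t2 / (t2 - 1)|, huge when t2 ~ 1
    return abs(t2 / (t2 - 1))

x = 1e-8
t1 = 1 + x
t2 = math.sqrt(t1)
print(cond_step1(x), cond_step2(t1), cond_step3(t2))
```

The first two condition numbers stay small, but the third grows like $2/x$ as $x \to 0$, which is exactly where the instability enters.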

I'm trying to understand why exactly the third step is ill-conditioned. Is it due to catastrophic cancellation, because $t_1$ would be close to $1$? Are $1+x$ and $\sqrt{t_1}$ simply insensitive to perturbations from floating-point errors, while $t_2-1$ is sensitive?
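For reference, the effect is easy to observe in IEEE double precision (this sketch is mine, not part of the quoted text). Rationalizing the numerator gives the algebraically equivalent form $x/(\sqrt{1+x}+1)$, which avoids the subtraction:

```python
# Naive sqrt(1+x) - 1 versus the cancellation-free rewrite
# x / (sqrt(1+x) + 1), obtained by rationalizing the numerator.
import math

x = 1e-12
naive  = math.sqrt(1 + x) - 1        # leading digits cancel in the subtraction
stable = x / (math.sqrt(1 + x) + 1)  # equal in exact arithmetic, no cancellation
print(naive)   # off in the leading digits
print(stable)  # close to the series value x/2 - x^2/8 + ...
```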

Best Answer

When you subtract two nearly equal floating-point numbers you lose precision. In your example, suppose we are working with seven-place base-$10$ numbers and let $x=10^{-4}$. Then $t_1 = 1+x = 1.000100$ is exact, and $t_2 = \sqrt{t_1} = 1.000050$ is correct to within $\frac12$ unit in the last place. But when we subtract, $t_2 - 1 = 5.0 \cdot 10^{-5}$, and only those two places are good: the leading five places canceled out, so the rounding error in $t_2$, which was tiny relative to $t_2$, is now large relative to the small result (the true value is $4.99987500\ldots \cdot 10^{-5}$). That is the catastrophic loss of precision.
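The seven-place arithmetic above can be replayed with Python's decimal module (a sketch of mine; the decimal context stands in for the answer's seven-digit base-10 machine):

```python
# Replaying the answer's example in 7-significant-digit decimal arithmetic.
from decimal import Decimal, getcontext

getcontext().prec = 7                # seven-place base-10 numbers

x  = Decimal("1e-4")
t1 = Decimal(1) + x                  # 1.0001, exact
t2 = t1.sqrt()                       # 1.000050, correctly rounded
f_naive  = t2 - Decimal(1)           # 5.0e-5: five leading places canceled
f_stable = x / (t2 + Decimal(1))     # 4.999875e-5: rationalized form
print(f_naive, f_stable)
```

Compared with the exact value $4.99987500\ldots \cdot 10^{-5}$, the naive subtraction keeps only the digits that survived the cancellation, while the rationalized form retains nearly full seven-digit accuracy.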
