[Math] Bisection Method – True error versus Approximate error


In the book *Numerical Methods for Engineers* by Steven C. Chapra, it is stated that the true error is always less than the approximate error, and therefore it is safe to use the Scarborough error formula to assert at least $n$ significant figures:

$$\epsilon_t \leq \epsilon_a < (0.5 \times 10^{2-n})\,\%$$
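For example, requiring $n = 3$ significant figures gives a tolerance of $0.5 \times 10^{2-3}\,\% = 0.05\,\%$.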

I can see that this always holds for the absolute true error and the absolute approximate error, as the following argument shows:

  • Let $X_R$ be the real root between the upper limit $x_u$ and lower limit $x_l$
  • Let $\Delta X$ be the distance between the limits $\Delta X = x_u-x_l$
  • Let $X_r^{new}$ be the new calculated approximate root, $(x_u+x_l)/2$
  • Let $X_r^{old}$ be the old approximate root; in this context, $x_l$ or $x_u$, depending on the last iteration.

Since the real root cannot lie outside the bracket, it is clear that:
$$|X_r^{new}-X_r^{old}| = \frac{\Delta X}{2}$$
$$|X_R-X_r^{new}| \leq \frac{\Delta X}{2} = |X_r^{new}-X_r^{old}|$$
$$\therefore |E_t| = |X_R-X_r^{new}| \leq |E_a| = \frac{\Delta X}{2}$$
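As a sanity check, here is a minimal Python sketch of this inequality in action, assuming the illustrative problem $f(x) = x^2 - 2$ with known root $\sqrt{2}$ on $[1, 2]$ (the function and bracket are assumptions for illustration only):

```python
# Minimal check that |E_t| <= ΔX/2 at every bisection step, for the
# assumed example f(x) = x**2 - 2 with known root sqrt(2) on [1, 2].
import math

f = lambda x: x**2 - 2
x_l, x_u = 1.0, 2.0
x_true = math.sqrt(2)  # known root, so the true error is computable

for i in range(10):
    x_r = (x_l + x_u) / 2      # new approximate root
    E_a = (x_u - x_l) / 2      # approximate absolute error, ΔX/2
    E_t = abs(x_true - x_r)    # true absolute error
    assert E_t <= E_a          # the inequality derived above
    print(f"iter {i}: E_t = {E_t:.2e} <= E_a = {E_a:.2e}")
    # keep the half of the bracket that still contains the root
    if f(x_l) * f(x_r) < 0:
        x_u = x_r
    else:
        x_l = x_r
```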

So the statement is true for the absolute error. What I haven't been able to prove (and I am starting to doubt it is always true) is the same statement for the relative errors:

$$\epsilon_a = |\frac{X_r^{new}-X_r^{old}}{X_r^{new}}|\times 100\%$$
$$\epsilon_t = |\frac{X_R - X_r^{new}}{X_R}|\times 100\%$$

Moreover, I am confused about what happens when the real root is very close to zero while the estimated root is still far from it. Wouldn't that make it possible for the true relative error to be higher than the approximate relative error? (A quick numeric check of this worry is sketched below.)
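Here is a small numeric sketch, assuming the illustrative problem $f(x) = x - 0.01$ (root $X_R = 0.01$, close to zero) on the bracket $[-1, 2]$; none of these numbers come from the book:

```python
# Two bisection steps on the assumed example f(x) = x - 0.01 over [-1, 2],
# whose root 0.01 is close to zero.
f = lambda x: x - 0.01
x_true = 0.01
x_l, x_u = -1.0, 2.0

x_old = (x_l + x_u) / 2                        # first estimate: 0.5
if f(x_l) * f(x_old) < 0:
    x_u = x_old
else:
    x_l = x_old

x_new = (x_l + x_u) / 2                        # second estimate: -0.25
eps_a = abs((x_new - x_old) / x_new) * 100     # approximate relative error
eps_t = abs((x_true - x_new) / x_true) * 100   # true relative error
print(eps_a, eps_t)                            # ~300 vs ~2600: eps_t > eps_a
```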

My final questions are: how can I prove that the statement is true for the relative errors, and how can I be sure it is safe to use the first inequality as a stopping condition for the algorithm if the statement doesn't always hold?

Thanks in advance.

Best Answer

You are correct that the relative error may actually be much higher than expected (infinite, even, if the actual root is zero). Because of this, the claimed inequalities cannot be asserted in general.

If the root is non-zero and both bounds have the same sign, then an upper bound on the relative error may be found by simply dividing by the bound closer to zero.
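Explicitly, assuming $0 < x_l \le X_R \le x_u$ (so $x_l$ is the bound closer to zero), dividing the absolute bound $|X_R - X_r^{new}| \le \Delta X/2$ by $|X_R| \ge x_l$ gives

$$\epsilon_t = \left|\frac{X_R - X_r^{new}}{X_R}\right| \times 100\% \le \frac{\Delta X/2}{x_l} \times 100\%$$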

To cover both cases (roots near zero and large roots), a combination of absolute and relative errors is usually used. This prevents the algorithm from failing to terminate when the root is zero (where the relative test fails) or very large (where the absolute test fails). A common stopping criterion has the form

$$|x_u-x_l|\le\epsilon_\mathrm{abs}+\epsilon_\mathrm{rel}\cdot |x_r|$$

where $\epsilon_\mathrm{abs}$ and $\epsilon_\mathrm{rel}$ are the desired absolute and relative tolerances.
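For illustration, here is a minimal Python sketch of bisection using this stopping criterion; the test function, bracket, and tolerance values are assumptions, not part of the answer above:

```python
# Bisection with the combined absolute/relative stopping criterion.
def bisect(f, x_l, x_u, eps_abs=1e-12, eps_rel=1e-9, max_iter=200):
    """Root of f in [x_l, x_u], assuming f(x_l) and f(x_u) differ in sign."""
    if f(x_l) * f(x_u) > 0:
        raise ValueError("f must change sign on [x_l, x_u]")
    for _ in range(max_iter):
        x_r = (x_l + x_u) / 2
        # eps_abs keeps the test meaningful near a zero root;
        # eps_rel scales the tolerance for large roots.
        if x_u - x_l <= eps_abs + eps_rel * abs(x_r):
            return x_r
        if f(x_l) * f(x_r) <= 0:
            x_u = x_r
        else:
            x_l = x_r
    return (x_l + x_u) / 2

print(bisect(lambda x: x**3 - 2, 0.0, 2.0))  # ~ 2**(1/3) ≈ 1.2599
```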
