[Math] Newton-Raphson for reciprocal square root

approximation, convergence-divergence, numerical-methods, real-analysis

I have a question about using Newton-Raphson to refine a guess of the reciprocal square root function. The reciprocal square root of $a$ is the number $x$ which satisfies the following equation:

$$x^{-2} = a$$

So we are looking for the root of the following equation:

$$x^{-2} - a = 0$$

Applying the Newton-Raphson method then leads to the following:

$$x_{n+1} = x_n - {f(x_n) \over f'(x_n)} = x_n - {x_n^{-2} - a \over -2x_n^{-3}} = x_n(1.5 - 0.5ax_n^2)$$
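As a quick numerical sanity check (a sketch; the value $a = 4$, the starting guess, and the iteration count are arbitrary choices for illustration):

```python
def rsqrt_newton(a, x0, iterations=10):
    """Division-free Newton iteration for 1/sqrt(a):
    x <- x * (1.5 - 0.5 * a * x^2), using only multiplies and subtracts."""
    x = x0
    for _ in range(iterations):
        x = x * (1.5 - 0.5 * a * x * x)
    return x

# Example: 1/sqrt(4) = 0.5, starting from a rough guess of 0.4
print(rsqrt_newton(4.0, 0.4))  # converges to 0.5
```

One caveat worth noting: this iteration only converges when the starting guess is close enough (for positive $x_0$, roughly $x_0 < \sqrt{3/a}$); a guess that is too large sends it off to $-\infty$.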

Before looking up the above standard solution, I tried to come up with my own equation:

$$x^2 = {1 \over a}$$

In this case, we are looking for the root of a different equation:

$$x^2 – {1 \over a} = 0$$

And the Newton-Raphson method gives us:

$$x_{n+1} = x_n - {f(x_n) \over f'(x_n)} = x_n - {x_n^2 - 1/a \over 2x_n} = 0.5\left(x_n + {1 \over ax_n}\right)$$
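This alternative can be checked the same way (again a sketch with arbitrarily chosen inputs):

```python
def rsqrt_babylonian(a, x0, iterations=10):
    """Newton iteration on x^2 - 1/a = 0:
    x <- 0.5 * (x + 1/(a*x)), which costs one division per iteration."""
    x = x0
    for _ in range(iterations):
        x = 0.5 * (x + 1.0 / (a * x))
    return x

# Example: 1/sqrt(4) = 0.5, starting from a deliberately poor guess of 1.0
print(rsqrt_babylonian(4.0, 1.0))  # converges to 0.5
```

Note that, unlike the division-free form, this iteration converges for any positive starting guess.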

Is there anything wrong with this alternative approach, and why would I choose one over the other?

Best Answer

As Daniel Fischer's comment points out, your alternative requires a division on each iteration. Historically, your alternative was the "standard" way of applying Newton's method to square roots; it predates Newton's calculus by centuries (see, for example, the Babylonian method).

The slowness of division vs. multiplication on early mainframes motivated the division-free Newton iteration for solving $x^{-2} = a$, as noted in the Question:

$$ x_{n+1} = x_n(1.5 - 0.5\,a\,x_n^2) $$

Despite the general acceleration of floating-point operations on microprocessors, division remains significantly slower than multiplication on popular platforms, so keeping the number of divisions low is still a valid design consideration.

Note that if $\sqrt{a}$ is really needed, rather than $1/\sqrt{a}$, we can get it with one additional multiplication:

$$ \sqrt{a} = a \cdot (1/\sqrt{a}) $$
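A minimal sketch of this final step, reusing the division-free iteration from above (the starting guess is an arbitrary choice for illustration):

```python
def sqrt_via_rsqrt(a, x0, iterations=10):
    """Compute sqrt(a) without any division: refine 1/sqrt(a) with the
    division-free Newton iteration, then multiply once by a at the end."""
    x = x0
    for _ in range(iterations):
        x = x * (1.5 - 0.5 * a * x * x)  # x -> 1/sqrt(a)
    return a * x  # sqrt(a) = a * (1/sqrt(a))

print(sqrt_via_rsqrt(2.0, 0.7))  # close to sqrt(2) = 1.41421356...
```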
