Problems with Taylor Series to Approximate Square Roots

Tags: approximation, roots, taylor-expansion

I am currently comparing different methods of computing square roots, and I've decided to look at two relatively well-known ones: the famous Babylonian Method and the use of Taylor Series to approximate the value of the square root.

The problem I've been having is that I haven't been able to sum up, in a way that satisfies me, the limitations and/or benefits of using Taylor Series to approximate square roots. The only thing I've been able to pin down is that this method is a lot slower than the Babylonian Method.

So does anyone know any clear limitations and/or benefits of using Taylor Series to approximate square roots? Thanks!

Best Answer

The relevant Taylor series for computing a square root can be given by Newton's binomial series. Let $g : [1,\infty) \to [1,\infty)$, $g(x)=\lfloor x^{1/2} \rfloor$. (To deal with $\sqrt{x}$ for $x \in (0,1)$, write it as $((x^{-1})^{1/2})^{-1}$.)
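For concreteness, here is a minimal Python sketch of this starting approximation (the name `g` just mirrors the notation above; `math.isqrt` keeps us from calling a library square root, and $x \in (0,1)$ would be routed through $1/x$ as just described):

```python
import math

def g(x):
    """g(x) = floor(sqrt(x)) for x >= 1; note floor(sqrt(x)) == isqrt(floor(x))."""
    return math.isqrt(math.floor(x))

print(g(10100.0))  # 100, since 100^2 = 10000 <= 10100 < 10201 = 101^2
```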

Then

$$x^{1/2}=\left (g(x)^2+ \left (x-g(x)^2 \right ) \right )^{1/2}=g(x) \left ( 1 + \frac{x-g(x)^2}{g(x)^2} \right )^{1/2} \\ = g(x) \sum_{k=0}^\infty {1/2 \choose k} h(x)^k$$

where $h(x):=\frac{x-g(x)^2}{g(x)^2}$. Here ${1/2 \choose k}$ is the generalized binomial coefficient, given by $\frac{\Gamma(3/2)}{k! \, \Gamma(3/2-k)}$.
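As a quick numerical sanity check (a sketch, not part of the derivation; the name `binom_half` is mine), one can compare the falling-factorial definition of ${1/2 \choose k}$ with this Gamma quotient. Python's `math.gamma` accepts the negative non-integer arguments $3/2-k$ directly:

```python
import math

def binom_half(k):
    """(1/2 choose k) via the falling factorial (1/2)(1/2 - 1)...(1/2 - k + 1) / k!."""
    num = 1.0
    for j in range(k):
        num *= 0.5 - j
    return num / math.factorial(k)

for k in range(6):
    via_gamma = math.gamma(1.5) / (math.factorial(k) * math.gamma(1.5 - k))
    print(k, binom_half(k), via_gamma)  # the two columns agree
```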

Now $\Gamma(3/2)=\sqrt{\pi}/2$. For $k>1$, the argument of $\Gamma(3/2-k)$ is negative, which looks a bit mysterious, but we can relate it back to positive arguments of the $\Gamma$ function using the reflection formula:

$$\Gamma(z)=\frac{\pi}{\Gamma(1-z) \sin(\pi z)}$$

so

$$\Gamma(3/2-k)=\frac{\pi}{\Gamma(k-1/2) \sin(\pi(3/2-k))}=(-1)^{k+1} \frac{\pi}{\Gamma(k-1/2)},$$

since $\sin(\pi(3/2-k))=-\cos(\pi k)=(-1)^{k+1}$.

Thus ${1/2 \choose k}=\begin{cases} 1 & k=0 \\ (-1)^{k+1} \frac{\Gamma(k-1/2)}{2 \sqrt{\pi} k!} & k>0\end{cases}.$
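This closed form translates directly into code (a sketch; the name `coeff` is mine):

```python
import math

def coeff(k):
    """(1/2 choose k) via the closed form above."""
    if k == 0:
        return 1.0
    return (-1) ** (k + 1) * math.gamma(k - 0.5) / (2 * math.sqrt(math.pi) * math.factorial(k))

# spot check against the falling factorial: (1/2 choose 2) = (1/2)(-1/2)/2! = -1/8
print(coeff(2))  # -0.125
```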

So the series as a whole is

$$x^{1/2}=g(x) \left ( 1 + \sum_{k=1}^\infty (-1)^{k+1} \frac{\Gamma(k-1/2)}{2 \sqrt{\pi} k!} h(x)^k \right ).$$
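Putting the pieces together, here is a minimal implementation sketch (the function name `taylor_sqrt` and the truncation parameter `terms` are my own choices; it tracks ${1/2 \choose k}$ by the ratio recurrence ${1/2 \choose k} = {1/2 \choose k-1} \cdot \frac{3/2-k}{k}$ rather than evaluating $\Gamma$'s):

```python
import math

def taylor_sqrt(x, terms=20):
    """Approximate sqrt(x) for x >= 1 by truncating the binomial series around g(x)^2."""
    g = math.isqrt(math.floor(x))   # g(x) = floor(sqrt(x))
    h = (x - g * g) / (g * g)       # h(x); the series needs h < 1, i.e. x < 2*g*g
    total, c = 1.0, 1.0             # k = 0 term; c tracks (1/2 choose k)
    for k in range(1, terms):
        c *= (1.5 - k) / k          # (1/2 choose k) from (1/2 choose k-1)
        total += c * h ** k
    return g * total

print(taylor_sqrt(10.0))  # about 3.1622776601..., i.e. sqrt(10)
```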

The main point of this little computation is to see that this method can be pretty slow; the exact speed depends on how close $x$ is to a perfect square. Ignoring constant factors, the asymptotic worst case occurs at $x=n^2+n$ for some integer $n$, and in this case $h(x)$ is around $1/n$, i.e. around $1/\sqrt{x}$. Similarly, $\frac{\Gamma(k-1/2)}{k!}$ behaves like $k^{-3/2}$, so in this worst-case scenario the terms behave like $k^{-3/2} x^{-k/2}$. Since the terms alternate in sign, the relative error decays more or less like $(x^{-1/2})^k$. This is linear convergence: each additional term contributes about $\log_{10}(x)/2$ correct decimal digits (noting that you start off with at least one correct digit from $g(x)$ itself).
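To see the linear convergence concretely, here is a small experiment (a sketch, reusing the truncation logic above; `math.sqrt` serves only as the reference value, and the name `taylor_digits` is mine):

```python
import math

def taylor_digits(x, terms):
    """Correct decimal digits of the truncated series, measured against math.sqrt."""
    g = math.isqrt(math.floor(x))
    h = (x - g * g) / (g * g)
    total, c = 1.0, 1.0
    for k in range(1, terms):
        c *= (1.5 - k) / k
        total += c * h ** k
    err = abs(g * total - math.sqrt(x)) / math.sqrt(x)
    return -math.log10(err) if err > 0 else float("inf")

x = 100 * 100 + 100                  # the worst case n^2 + n with n = 100
for terms in range(1, 6):
    print(terms, round(taylor_digits(x, terms), 1))
# each extra term adds roughly log10(x)/2 = 2 correct digits
```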

In contrast, Newton's method (which for square roots is also called the Babylonian method) converges quadratically: loosely speaking, the number of correct digits doubles with each iteration. Moreover, Newton's method doesn't require us to precompute $g(x)$, nor do we need an estimate of $\pi$, nor do we need to evaluate all these $\Gamma(k-1/2)$'s.
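For comparison, a minimal sketch of the Babylonian iteration (assuming $x > 0$; the stopping rule exploits that after the first step the iterates decrease monotonically toward $\sqrt{x}$, so the loop ends when they stop decreasing in floating point):

```python
def babylonian_sqrt(x):
    """Newton/Babylonian iteration r <- (r + x/r)/2; correct digits roughly double per step."""
    r = x if x >= 1 else 1.0        # any positive start at or above sqrt(x) works
    while True:
        new = (r + x / r) / 2
        if new >= r:                # the sequence decreases until it converges
            return new
        r = new

print(babylonian_sqrt(10.0))  # 3.1622776601683795
```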

Note that the Taylor series method can be improved a bit by instead taking $g(x)=\lceil x^{1/2} \rceil$ when that is the nearer square root, which gives better performance when $x$ is slightly below a perfect square. But the $n^2+n$ scenario behaves pretty much the same either way.
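A sketch of that variant (the helper name `nearest_square_base` is mine): pick whichever of $\lfloor x^{1/2} \rfloor$ and $\lceil x^{1/2} \rceil$ has its square nearer to $x$; the series is unchanged except that $h(x)$ may now be negative (it still only needs $|h(x)|<1$).

```python
import math

def nearest_square_base(x):
    """Choose g(x) as floor or ceil of sqrt(x), whichever perfect square is nearer to x."""
    lo = math.isqrt(math.floor(x))
    hi = lo + 1
    return lo if x - lo * lo <= hi * hi - x else hi

print(nearest_square_base(9999.0))   # x = 100^2 - 1: picks 100, giving h = -1e-4
print(nearest_square_base(10100.0))  # x = 100^2 + 100: still picks 100, h = 1e-2
```

For $x=n^2-1$ this picks $g=n$ and gives $h(x)=-1/n^2$ instead of roughly $2/n$, while for $x=n^2+n$ it still picks $n$, consistent with the remark above.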