Time Complexity of Computing sin(x) to t Bits of Precision

Tags: algorithms, computational-complexity, computer-science, na.numerical-analysis

Short version of the question: Presumably, it's poly$(t)$. But what polynomial, and could you provide a reference?

Long version of the question:
I'm sort of surprised to be asking this, because it's such an extremely basic sounding question. Here are some variants on it:

  1. How much time does it take to compute $\pi$ to $t$ bits of precision?
  2. How much time does it take to compute $\sin(x)$ to $t$ bits of precision?
  3. How much time does it take to compute $e^x$ to $t$ bits of precision?
  4. How much time does it take to compute $\mathrm{erf}(x)$ to $t$ bits of precision?
  5. How much time and how many random bits does it take to generate a (discrete) random variable $X$ such that there is a coupling of $X$ with a standard Gaussian $Z \sim N(0,1)$ for which $|X - Z| < \delta$ except with probability at most $\epsilon$? (See the sketch just after this list.)
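
Regarding question 5, here is a minimal sketch of the natural inverse-CDF approach, assuming one trusts the library's `erfinv`: use $t$ random bits to pick a uniform grid point $u \in (0,1)$ and return $\Phi^{-1}(u) = \sqrt{2}\,\mathrm{erfinv}(2u - 1)$. The function name and guard-bit count are mine, and the tail analysis (where $\Phi^{-1}$ has a huge derivative, which is where the exceptional probability $\epsilon$ comes from) is exactly what this sketch sweeps under the rug:

```python
import random
from mpmath import mp, mpf, sqrt, erfinv

def approx_gaussian(t):
    """Use t random bits to pick a uniform grid point u in (0,1), then
    return Phi^{-1}(u) = sqrt(2) * erfinv(2u - 1) in t-bit arithmetic.
    The output couples with a true N(0,1) draw to within ~2^-t except
    in the tails, where Phi^{-1} blows up (analysis omitted here)."""
    mp.prec = t + 20                               # guard bits (heuristic)
    u = (mpf(random.getrandbits(t)) + mpf(1) / 2) / mpf(2) ** t
    return sqrt(2) * erfinv(2 * u - 1)

print(approx_gaussian(100))   # one sample at ~100-bit resolution
```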

In my area of theory of computer science, no one seems to pay much attention to such questions; an algorithm description might typically read

"Generate two Gaussian random variables $Z$ and $Z'$ and check if $\sin(Z \cdot Z') > 1/\pi$"

or some such thing. But technically, one should worry about the time complexity here.

One colleague of mine who's more of an expert on these things assured me that all such "calculator functions" take time at most poly$(t)$. I well believe that this is true. But again, what polynomial (out of curiosity, at least), and what is the reference?
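
To see this poly$(t)$ behavior empirically (a crude check, not a proof), one can time these "calculator functions" in Python's mpmath library as the working precision doubles; the precision values below are arbitrary choices:

```python
import time
from mpmath import mp, mpf, sin, exp, erf, pi

# Time a few "calculator functions" as the precision t doubles;
# the growth in running time should look polynomial in t.
for t in [1000, 2000, 4000, 8000, 16000]:
    mp.prec = t                        # t bits of working precision
    x = mpf(1) / 3
    start = time.perf_counter()
    sin(x); exp(x); erf(x); +pi        # unary + forces pi to t bits
    print(t, time.perf_counter() - start)
```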

I kind of assumed that the answers would be in every single numerical analysis textbook, but I couldn't find them there. It seems (perhaps reasonably) that numerical analysis cares mainly about getting the answers to within a fixed precision like 32 or 64 bits or whatever. But presumably somebody has thought about getting the results to arbitrary precision, since you can type

Digits := 5000; erf(1.0);

into Maple and it'll give you an answer right away. But it seemed hard to find. After much searching, I hit upon the key phrase "unrestricted algorithm", which led me to the paper "An unrestricted algorithm for the exponential function" by Clenshaw and Olver (1980). It's pretty hard to read, analyzing the time complexity of $e^x$ in terms of eight (??!) parameters, but its equation (4.55) seems to give some answers: perhaps $\tilde{O}(t^2)$, assuming $|x|$ is constant?
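
For intuition about where a bound like $\tilde{O}(t^2)$ comes from, here is a hedged sketch of the naive method that unrestricted algorithms refine: sum the Taylor series directly in $t$-bit arithmetic. For $|x| \le 1$, about $t/\log t$ terms suffice (the terms decay like $1/n!$), each costing $O(M(t))$, giving roughly $t \cdot M(t)$, i.e. $\tilde{O}(t^2)$ with schoolbook multiplication. The function name and guard bits below are my own heuristic choices, with mpmath standing in for $t$-bit arithmetic:

```python
from mpmath import mp, mpf

def exp_taylor(x, t):
    """e^x for |x| <= 1 by direct Taylor summation in t-bit arithmetic;
    roughly t/log2(t) terms are needed, since n! exceeds 2^t around there."""
    mp.prec = t + 20                     # guard bits (heuristic)
    x = mpf(x)
    total, term, n = mpf(1), mpf(1), 1
    while abs(term) > mpf(2) ** (-(t + 10)):
        term = term * x / n              # now holds x^n / n!
        total += term
        n += 1
    return total

print(exp_taylor(1, 256))                # compare with mpmath.exp(1)
```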

And really, all that work for little old $e^x$? As for $\mathrm{erf}(x)$, I found the paper "The functions erf and erfc computed with arbitrary precision" by Chevillard (2009). It was easier to read, but it would still take me some time to extract the answer; my first impression was $\tilde{O}(t^{3/2})$. But again, surely this question was not first investigated in 2009, was it?!
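
For concreteness, here is a hedged sketch of the plain Maclaurin series $\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \sum_{n \ge 0} \frac{(-1)^n x^{2n+1}}{n!(2n+1)}$ in $t$-bit arithmetic; this is not Chevillard's algorithm, just the obvious baseline. Note the alternating terms first grow like $x^{2n}/n!$ before decaying, so large $|x|$ forces extra working precision or a different method, which is one reason a region-by-region analysis enters the cost:

```python
from mpmath import mp, mpf, sqrt, pi

def erf_series(x, t):
    """erf(x) from its Maclaurin series in t-bit arithmetic; fine for
    moderate |x|, but cancellation among the alternating terms makes
    it unsuitable for large |x| without raising the precision."""
    mp.prec = t + 20                         # guard bits (heuristic)
    x = mpf(x)
    c, total, n = x, x, 0                    # c = (-1)^n x^(2n+1)/n!
    while abs(c) > mpf(2) ** (-(t + 10)):
        n += 1
        c = c * (-x * x) / n
        total += c / (2 * n + 1)
    return 2 / sqrt(pi) * total

print(erf_series(mpf(1) / 2, 256))           # compare with mpmath.erf(0.5)
```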

(By the way, question #5 is the one for which I really want to know the answer, but I can probably work it out from the answer to question #4.)

Best Answer

For state-of-the-art arithmetic algorithms, I'd recommend Brent and Zimmermann's Modern Computer Arithmetic (a work in progress, available online): http://www.loria.fr/~zimmerma/mca/pub226.html See Chapter 4.

As Steve points out, log, exp, and the trig functions all cost $O(M(n) \log n)$ (in fact they are all computed via log), where $M(n)$ is the cost of multiplying two $n$-bit numbers (theoretically $n \log n \cdot 2^{O(\log^* n)}$ by Fürer's algorithm) and $n$ is the number of bits of accuracy. Computing $\pi$ falls within the same bound. This is only the theoretical picture, however; below roughly a billion digits, other algorithms with worse asymptotic complexity are faster in practice.
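
To make the $O(M(n) \log n)$ claim concrete, here is a minimal sketch of the AGM-based logarithm underlying it, using the identity $\ln s \approx \pi / (2\,\mathrm{AGM}(1, 4/s))$, which is accurate to $n$ bits once $s > 2^{n/2}$ (see Chapter 4 of the book). Each AGM iteration costs a multiplication and a square root, and $O(\log n)$ iterations suffice. Caveats: the shift amount and guard bits are heuristic choices, the function names are mine, and I borrow $\pi$ and $\ln 2$ from mpmath, whereas a real implementation would compute them with the same AGM machinery (mpmath also ships a built-in `agm`, bypassed here to show the loop):

```python
from mpmath import mp, mpf, sqrt, log, pi

def agm(a, b):
    """Arithmetic-geometric mean; the gap |a - b| shrinks quadratically."""
    eps = mpf(2) ** (10 - mp.prec)
    while abs(a - b) > eps:
        a, b = (a + b) / 2, sqrt(a * b)
    return a

def ln_via_agm(x, t):
    """ln(x) to roughly t bits for x > 1: shift so s = x*2^m > 2^(t/2),
    apply ln(s) ~ pi / (2 AGM(1, 4/s)), then undo the shift."""
    mp.prec = t + 30                       # guard bits (heuristic)
    m = t // 2 + 20                        # makes the identity accurate enough
    s = mpf(x) * mpf(2) ** m
    ln_s = pi / (2 * agm(mpf(1), 4 / s))
    return ln_s - m * log(2)               # ln 2 borrowed from mpmath

print(ln_via_agm(3, 1000) - log(3))        # should be around 2^-1000
```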

Erf is apparently harder: the book gives some algorithms based on power series and continued fractions, but it avoids stating an explicit computational cost, since the convergence speed differs from region to region.
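
The continued-fraction side can be sampled with the classical Laplace continued fraction for $\mathrm{erfc}$ (Abramowitz and Stegun 7.1.14), $\sqrt{\pi}\, e^{x^2}\, \mathrm{erfc}(x) = \cfrac{1}{x + \cfrac{1/2}{x + \cfrac{1}{x + \cfrac{3/2}{x + \cdots}}}}$, which converges quickly for large $x$ and poorly near $0$; that is exactly the region-dependence that makes a single cost statement awkward. A hedged sketch (the names are mine, and the number of terms is left as an unanalyzed parameter), evaluated backward:

```python
from mpmath import mp, mpf, exp, sqrt, pi, erfc

def erfc_cf(x, terms, t):
    """erfc(x) for x > 0 via the Laplace continued fraction, whose
    partial numerators are k/2 for k = 1, 2, 3, ..., run backward."""
    mp.prec = t + 20                     # guard bits (heuristic)
    x = mpf(x)
    tail = mpf(0)
    for k in range(terms, 0, -1):
        tail = (mpf(k) / 2) / (x + tail)
    return exp(-x * x) / sqrt(pi) / (x + tail)

# sanity check against mpmath's builtin; the gap shrinks as `terms` grows
print(erfc_cf(5, 60, 100) - erfc(5))
```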
