[Math] Time complexity to calculate a digit in a decimal

algorithms · arithmetic · computational-complexity

As we know, it is quite fast to calculate any digit of a rational number. For example, given 1/7 (0.142857 142857 …) and any integer K, I could easily return the Kth digit of 1/7 by doing an integer modulo operation.
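As a sketch of what the question means, here is one way (of several) to extract the Kth decimal digit of a fraction with modular arithmetic; the function name and the 1-based indexing are my own choices, not from the question:

```python
# Kth decimal digit (1-based, after the decimal point) of the rational
# a/b with 0 < a < b, via fast modular exponentiation: O(log K) steps.
def kth_digit(a, b, k):
    # r is the long-division remainder just before the Kth digit.
    r = (a * pow(10, k - 1, b)) % b
    return (10 * r) // b
```

For 1/7 this reproduces the repeating block: `[kth_digit(1, 7, k) for k in range(1, 7)]` gives `[1, 4, 2, 8, 5, 7]`.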

But this kind of easy computation doesn't work for some other numbers, like PI. (This wiki page http://en.wikipedia.org/wiki/Calculating_pi#Digit_extraction_methods says that it costs O(n^2) to calculate the Nth digit of PI.)

After that observation I make two rudimentary assertions (if we assume that the modulo operation takes constant time):

  • For any rational number (in its decimal form), it takes constant time to calculate any of its digits;
  • Conversely, given a number, if it takes constant time to calculate any of its digits, it is a rational number.

I have tried, and I think I am close to proving the first one, but I have no idea about the second. Has anyone got any idea about that? Or have I missed any articles describing this kind of problem?

Moreover, if we call the time complexity of calculating a digit in a decimal its digit time complexity (it seems we need a better name), we could partition all real numbers by their digit time complexity. Then how do we calculate the cardinality of each subset?

Best Answer

This is basically the Hartmanis-Stearns Conjecture, and it is a major open question. The main difference between your question and the Conjecture is the formalization of "constant time". See http://rjlipton.wordpress.com/2012/06/04/transcendental-aspects-of-turing-machines/ and http://rjlipton.wordpress.com/2012/06/15/why-the-hartmanis-stearns-conjecture-is-still-open/ for lots of discussion on the topic.

In a nutshell, the Hartmanis-Stearns Conjecture considers the complexity of computing the next digit in the decimal expansion. Rather than taking $n$ as an input and outputting the $n$-th digit (as you propose), the machine prints the decimal expansion forever. The catch is that it must print the digits at a constant rate. This is known as being computable in "real time".

The HS Conjecture says that any such number is either rational or transcendental.


Let's take a look at your two claims.

For any rational number (in its decimal form), it takes constant time to calculate any of its digits;

The intuition behind this is that any rational number's decimal expansion is eventually periodic. In other words, there is a fixed prefix, followed by a repeating pattern forever. So to output the $n$th digit, we have two cases. If $n$ is within the initial prefix, we output that digit (from a pre-computed table). Otherwise, we compute $(n - \ell) \bmod p$, where $\ell$ is the prefix length and $p$ the period, and return that element of the repeating pattern.
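The two cases above can be sketched directly; the function name and the 0-based indexing are my own conventions, and the prefix/pattern tables are assumed to be precomputed:

```python
# Digit lookup from a precomputed (prefix, repeating pattern) pair.
# n is 0-based: n = 0 is the first digit after the decimal point.
def digit_from_tables(prefix, pattern, n):
    if n < len(prefix):           # case 1: n falls inside the fixed prefix
        return prefix[n]
    # case 2: index into the repeating pattern
    return pattern[(n - len(prefix)) % len(pattern)]
```

For example, 1/6 = 0.1666… has prefix `[1]` and pattern `[6]`, while 1/7 has an empty prefix and pattern `[1, 4, 2, 8, 5, 7]`.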

Unfortunately, this is not technically "constant time". Both checking whether $n$ is within the prefix and computing the modulus require time $O(\log n)$, since you have to read every bit of $n$.

In the HS version, the machine can remember its location between digits. It therefore only has to increment a pointer -- either move to the next spot in the pre-computed table or move to the next location in the repeating pattern. These are both actually constant time operations.

Conversely, given a number, if it takes constant time to calculate any of its digits, it is a rational number.

Again, we run into technicalities of what exactly constant-time means. Under many reasonable encoding schemes for $n$ (i.e., little-endian binary) this does indeed imply rationality.

Other reasonable encodings (i.e., big-endian binary) yield transcendental numbers. Consider the number whose $n$-th binary digit is 0 iff the second bit of $n$ (written in big-endian binary) is 0. This number is clearly not periodic, since the runs keep increasing in length, so it's not rational. In fact, it's actually transcendental.
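To see the growing runs concretely, here is a small sketch of that digit function (the function name is mine; given $n$ in big-endian binary, the answer is read off in constant time as the second symbol):

```python
# n-th binary digit of the constructed number: 0 iff the second
# (most-significant-side) bit of n, written in big-endian binary, is 0.
def digit(n):
    bits = bin(n)[2:]                     # big-endian binary string of n
    return 0 if len(bits) < 2 or bits[1] == "0" else 1
```

Listing `digit(n)` for n = 2, …, 15 gives 0 1 0 0 1 1 0 0 0 0 1 1 1 1: the runs of equal digits double in length, so the expansion is not eventually periodic.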


Note that the Hartmanis-Stearns conjecture does not claim that every transcendental number can be printed in real time. I don't have a counter-example handy (please leave a comment if you do!).

Another issue is that the HS conjecture allows for extremely-complicated calculations for some digits, provided that they are preceded by a sufficiently-long trivial stretch. Converting the big-endian example above into the real-time formalism uses this trick, for example. However, if you view the theorem as a negative statement about computing irrationals, then the loophole actually strengthens the conjecture (since more potential algorithms are legal).

Another way of formalizing constant time is the word-model. In this model, basic integer operations ($+$, $-$, $<$, $=$, etc) are considered constant time. I don't know what the status of the conjecture is in that model. (Pointers greatly appreciated!)
