A strange occurrence in the decimal digits of $\pi$

computer-science · convergence-divergence · pi · sequences-and-series

I was messing around with various ways to calculate $\pi$ on my computer, and I noticed something a bit strange. I was using $\frac{1}{1}-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\cdots = \frac{\pi}{4}$ and computing this sum to varying degrees of accuracy. Using `pi = 4*sum([((-1)**i)/(2*i+1) for i in range(n)])`, I got the following results for different values of $n$ (I have highlighted in bold the digits which are incorrect; a runnable version of this snippet follows the table):

| $n$ | result |
|---:|:---|
| 1,000 | 3.14**0**592653**83**979**4** |
| 10,000 | 3.141**4**926535**90034** |
| 25,000 | 3.1415**5**2653589**80**3 |
| 80,000 | 3.1415**801**535897**496** |
| 100,000 | 3.1415**8**26535897**198** |
| 125,000 | 3.1415**84**6535897**28** |
| 160,000 | 3.1415**8640**35897**374** |
| 200,000 | 3.1415**87**6535897**618** |
| 250,000 | 3.1415**88**6535897**81** |
| 300,000 | 3.1415**89320256464** |
| 500,000 | 3.14159**0**653589**6**9**2** |
| 600,000 | 3.14159**09869230147** |
| 1,000,000 | 3.14159**1**6535897**743** |
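
Here is a self-contained version of the one-liner above; the helper name `leibniz`, the choice of $n$ values in the loop, and the comparison against `math.pi` are just scaffolding for reproducing the table:

```python
import math

def leibniz(n):
    """Partial sum of the series used above: 4 * (1/1 - 1/3 + 1/5 - ...), n terms."""
    return 4 * sum(((-1) ** i) / (2 * i + 1) for i in range(n))

for n in (1_000, 10_000, 100_000, 1_000_000):
    approx = leibniz(n)
    # repr() shows the full double-precision value; the error column makes it
    # easy to see which decimal places are affected.
    print(f"{n:>9,}  {approx!r}  error={math.pi - approx:.3e}")
```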

It seems that when $n$ has $2$ and $5$ as its only prime factors, there is a strange block of correct digits after the first incorrect digit. Furthermore, the number of incorrect digits in that first run seems to increase in some way with the difference between the power of $2$ and the power of $5$. (I haven't tested enough values of $n$ to deduce much more about this; a quick factorization check is sketched below.)
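
To check the "only $2$s and $5$s" observation mechanically, a small helper (`only_2_and_5` is just a name made up for this sketch) can flag which rows qualify; the two values it rejects, $300{,}000$ and $600{,}000$, are exactly the rows of the table with no block of corrected digits:

```python
def only_2_and_5(n):
    """Return True if n has no prime factors other than 2 and 5."""
    for p in (2, 5):
        while n % p == 0:
            n //= p
    return n == 1

for n in (1_000, 80_000, 125_000, 300_000, 500_000, 600_000, 1_000_000):
    print(f"{n:>9,}  {only_2_and_5(n)}")
```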

I can't tell if this is an artefact of the way the computer carries out the decimal computation, or if it is the result of some theorem about the convergence of this series. Does anyone have any insight into why this might be happening?

Best Answer

This is a known property of the Leibniz–Gregory series, and it has actually been used to compute many digits of $\pi$ from this slowly converging series (see Borwein, Borwein, and Dilcher, "Pi, Euler numbers, and asymptotic expansions", *Amer. Math. Monthly*, 1989). It arises from the Euler–Maclaurin formula: $$\frac{\pi}{2} - 2 \sum_{k=1}^{N/2} \frac{(-1)^{k-1}}{2k-1} \sim \sum_{m=0}^\infty \frac{E_{2m}}{N^{2m+1}}$$ where the $E_{2m}$ are the Euler numbers $1, -1, 5, -61, \ldots$ When $N$ (twice the number of terms $n$ in your sum) factors into twos and fives only (such as when $n$ is a power of $10$), every term $E_{2m}/N^{2m+1}$ of the error is a terminating decimal, so the error is a sum of short terminating decimals that corrupt only isolated blocks of digits. For example, with $n = 100{,}000$ the error is $\frac{1}{n} - \frac{1}{4n^3} + \cdots = 10^{-5} - 2.5\times 10^{-16} + \cdots$; adding $10^{-5}$ to your value $3.1415826535897198$ repairs the fifth decimal place, and the tiny mismatch that remains is accumulated double-precision rounding rather than the series itself.
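
To see the expansion at work numerically, here is a short sketch (my addition, not part of the original answer; the helper names are made up here). Setting $N = 2n$, the error of the questioner's sum is asymptotically $2\sum_m E_{2m}/(2n)^{2m+1} = \frac{1}{n} - \frac{1}{4n^3} + \frac{5}{16n^5} - \cdots$, and adding the first few terms back recovers $\pi$ almost exactly:

```python
import math

# Euler numbers E_0, E_2, E_4, E_6; four terms are already far beyond what
# double precision can resolve for the n values used here.
EULER = [1, -1, 5, -61]

def leibniz(n):
    """The question's partial sum: 4 * sum_{i<n} (-1)**i / (2*i + 1)."""
    return 4 * sum(((-1) ** i) / (2 * i + 1) for i in range(n))

def euler_correction(n, terms=4):
    """Asymptotic error pi - leibniz(n) ~ 2 * sum_m E_{2m} / (2n)**(2m+1)."""
    N = 2 * n
    return 2 * sum(EULER[m] / N ** (2 * m + 1) for m in range(terms))

for n in (100_000, 500_000, 1_000_000):
    approx = leibniz(n)
    corrected = approx + euler_correction(n)
    # The corrected value agrees with math.pi up to ~1e-13 of floating-point
    # rounding accumulated in the summation itself.
    print(f"{n:>9,}  raw={approx!r}  corrected={corrected!r}  pi={math.pi!r}")
```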
