The Chudnovsky series is based on a hypergeometric series, which may be why you think it is expressible as a simple geometric series. However, in general hypergeometric series are not expressible as geometric series.
That said, using the expression you linked to on Wikipedia, you can write a trivial implementation in Python:
$$\frac{1}{\pi}=12\sum_{k=0}^{\infty}{\frac{(-1)^{k}(6k)!(13591409+545140134k)}{(3k)!(k!)^{3}640320^{3k+3/2}}}$$
So we have:
$$\pi\approx1/\left(12\sum_{k=0}^{x}{\frac{(-1)^{k}(6k)!(13591409+545140134k)}{(3k)!(k!)^{3}640320^{3k+3/2}}}\right)$$
for some large value of $x$. So we can write the following:
# x is the limit of the summation; increase x for a more
# accurate approximation. (Python 3 assumed, so 3/2 == 1.5.)
from math import factorial

def Chudnovsky(x):
    total = 0
    for k in range(x + 1):
        total += ((-1)**k * factorial(6*k) * (13591409 + 545140134*k)) / (factorial(3*k) * factorial(k)**3 * 640320**(3*k + 3/2))
    return 1 / (12 * total)
However, bear in mind it's been a while since I've done Python scripting; I wrote this just from documentation I could find on the Python site, so the syntax/semantics may not be exact, but the concept is there. Also note that ordinary floats limit the accuracy: the series gains roughly 14 digits per term, so beyond the first couple of terms a double gives no further improvement.
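If you want more digits than a double can hold, the same series can be evaluated with Python's `decimal` module. This is just a sketch (the function name and guard-digit handling are my own, not from the original answer); it uses the identity $640320^{3k+3/2} = 640320^{3k+1}\sqrt{640320}$ to keep the denominator exact until the single square root:

```python
from decimal import Decimal, getcontext
from math import factorial

def chudnovsky_decimal(terms, digits=50):
    # Evaluate the Chudnovsky series with arbitrary-precision Decimals.
    getcontext().prec = digits + 10       # extra guard digits
    root = Decimal(640320).sqrt()
    total = Decimal(0)
    for k in range(terms + 1):
        num = Decimal((-1)**k * factorial(6*k) * (13591409 + 545140134*k))
        # 640320**(3k + 3/2) == 640320**(3k + 1) * sqrt(640320)
        den = Decimal(factorial(3*k) * factorial(k)**3 * 640320**(3*k + 1)) * root
        total += num / den
    return 1 / (12 * total)
```

Each term contributes roughly 14 correct digits, so even `terms=2` already yields over 40 digits of $\pi$.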
Hope this helps.
DISCLAIMER: I'm not sure if this is the original Photoshop algorithm, but it looks pretty good to me.
If you have a spherical panoramic photo like this (original photo taken by Javierdebe, on flickr)
and you want to have this
all you have to do is figure out which pixel in the original picture each pixel in the target picture comes from.
I added some (ugly) grid lines on the photos to better illustrate how the transformation can be done. Note that the longitude lines (the black lines) remain straight, the latitude lines (the yellow lines) become circles, and the distance between the latitude lines remains the same (which is a big assumption). The bottom edge of the original photo shrinks to a point at the center of the target image, and the left and right edges meet at 6 o'clock of the target image.
If we create a polar coordinate system whose origin is at the center of the target image, with the polar axis pointing upward (the red line), we can easily get the polar coordinates (r, θ)
of each pixel (x', y')
in the target image (note that the origin of the Cartesian coordinate system is at the top-left corner, and the y'-axis points downward).
$$
r = \sqrt{(H - x')^2 + (H - y')^2}
$$
$$
\theta = \tan^{-1}\frac{H - x'}{H - y'}
$$
where H
is the height of the original image, and is also the radius of the target image.
From that we can work out the coordinate (x, y)
of the corresponding pixel in the original image.
$$
x = \frac{\pi - \theta}{2\pi}W
$$
$$
y = H - r
$$
where W
is the width of the original image.
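The two formulas above can be written as a small Python function (the function name is mine, purely for illustration; `atan2` is used so the angle covers the full circle, not just one quadrant):

```python
import math

def planet_to_pano(x_p, y_p, W, H):
    """Map a target-image pixel (x', y') to source coordinates (x, y).

    Hypothetical helper based on the formulas above: H is the source
    image height (and the target radius), W the source image width.
    """
    r = math.hypot(H - x_p, H - y_p)
    theta = math.atan2(H - x_p, H - y_p)   # polar axis points up
    x = (math.pi - theta) / (2 * math.pi) * W
    y = H - r
    return x, y
```

For example, the center of the target image, (H, H), maps to (W/2, H), i.e. the bottom row of the panorama, exactly as described above.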
The rest is simple. Just loop over each pixel in the target image area, and grab the color from the corresponding pixel in the original image.
Here is a piece of Ruby code (using RMagick). It's not optimized, so it's a bit slow:
#!/usr/bin/env ruby
require 'rmagick'

TWO_PI = 2 * Math::PI

image_path = ARGV[0]  # Path to the original spherical panorama photo
planet_path = ARGV[1] # Path to the tiny planet image

original = Magick::Image.read(image_path).first # Load original image
width = original.columns
height = original.rows

target_size = height * 2
planet = Magick::Image.new(target_size, target_size) # Create a square canvas

target_size.times do |x|
  target_size.times do |y|
    r, θ = Complex(height - y, height - x).polar # Cheat using the complex plane
    next if r > height # Ignore the pixels outside the circle
    x_original = width / 2 - θ * width / TWO_PI
    y_original = height - r
    color = original.pixel_color(x_original, y_original) # Grab the color from the original image
    planet.pixel_color(x, y, color) # Apply the color to the planet image
  end
end

planet.write(planet_path)
For intuition, look at the way the factorial base is used for $e = \sum_{n=0}^\infty \frac{1}{n!}$ here; $\pi$ is similar but less simple.
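As a quick sanity check of how fast the factorial series converges (my own illustration, not from the linked answer):

```python
from math import e, factorial

# Partial sum of e = sum over n of 1/n!; by n = 17 the remaining
# terms are already smaller than double-precision rounding error.
approx = sum(1 / factorial(n) for n in range(18))
```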