[Math] Tiny Planet Algorithm

algorithms, recursive algorithms

So I've recently been looking at tiny planet images.

I've been googling a few things to try and find out how images are converted from normal to a tiny planet. Some phone apps, as well as Photoshop, do this.

I think Photoshop does it by converting Cartesian coordinates to polar coordinates.
I found a good explanation of converting them here.

However, I have yet to find anywhere that describes an algorithm for the process.

Is there a set formula, or is it more complex than that?

I'm not sure it's as easy as moving pixels and converting coordinates, because I think some form of stretching must be involved… hence looking for an algorithm, and this exchange as a resource. Thanks.

Best Answer

DISCLAIMER: I'm not sure if this is the original Photoshop algorithm, but this looks pretty good to me.

If you have a spherical panoramic photo like this (original photo taken by Javierdebe, on flickr):

[image: spherical panoramic photo]

and you want to have this:

[image: the resulting tiny planet]

all you have to do is figure out which pixel in the original picture each pixel in the target picture comes from.

I added some (ugly) grid lines on the photos to better illustrate how the transformation can be done. Note that the longitude lines (the black lines) remain straight, the latitude lines (the yellow lines) become circles, and the distance between the latitude lines remains the same (which is a big assumption). The bottom edge of the original photo shrinks to a point at the center of the target image. The left edge and the right edge meet at 6 o'clock of the target image.

If we create a polar coordinate system whose origin is at the center of the target image, and whose polar axis points upward (the red line), we can easily get the polar coordinates (r, θ) of each pixel (x', y') in the target image (note that the origin of the Cartesian coordinate system is at the top-left corner, and the y'-axis points downward).

$$ r = \sqrt{(H - x')^2 + (H - y')^2} $$ $$ \theta = \tan^{-1}\frac{H - x'}{H - y'} $$ where H is the height of the original image, and is also the radius of the target image.
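As a side note, this polar conversion can also be written directly with Math.hypot and Math.atan2 instead of the complex-number trick used in the code further down; a minimal sketch (the helper name is just for illustration):

# Hypothetical helper: polar coordinates (r, θ) of a target-image pixel
# (x, y), measured from the centre (h, h), with the polar axis pointing
# upward (towards smaller y).
def to_polar(x, y, h)
  dx = h - x
  dy = h - y
  r     = Math.hypot(dx, dy)   # distance from the centre
  theta = Math.atan2(dx, dy)   # angle from the upward axis, in (-π, π]
  [r, theta]
end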

From that we can work out the coordinate (x, y) of the corresponding pixel in the original image.

$$ x = \frac{\pi - \theta}{2\pi}W $$ $$ y = H - r $$ where W is the width of the original image.
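As a quick sanity check with made-up dimensions, say W = 2000 and H = 1000 (so the target image is 2000 × 2000 with its centre at (1000, 1000)), take the target pixel (x', y') = (1000, 500), which lies 500 pixels straight above the centre:

$$ r = \sqrt{(1000 - 1000)^2 + (1000 - 500)^2} = 500, \qquad \theta = \tan^{-1}\frac{0}{500} = 0 $$ $$ x = \frac{\pi - 0}{2\pi}\cdot 2000 = 1000, \qquad y = 1000 - 500 = 500 $$

so the 12 o'clock direction of the planet is taken from the horizontal middle of the panorama, halfway between the top and bottom edges, which matches the red polar axis described above.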

The rest is simple. Just loop over each pixel in the target image area, and grab the color from the corresponding pixel in the original image.

Here is a piece of Ruby code (with RMagick); it's not optimized, so it's a bit slow:

#!/usr/bin/env ruby
require 'rmagick'

TWO_PI = 2 * Math::PI

image_path = ARGV[0]   # Path to the original spherical panorama photo
planet_path = ARGV[1]  # Path to the tiny planet image

original = Magick::Image.read(image_path).first  # Load original image
width = original.columns
height = original.rows

target_size = height * 2
planet = Magick::Image.new(target_size, target_size)  # Create a square canvas

target_size.times do |x|
  target_size.times do |y|
    r, θ = Complex(height - y, height - x).polar  # Cheat using complex plane
    next if r > height  # Ignore the pixels outside the circle
    x_original = width / 2 - θ * width / TWO_PI
    y_original = height - r
    color = original.pixel_color(x_original, y_original)  # Grab the color from original image
    planet.pixel_color(x, y, color)  # Apply the color to the planet image
  end
end

planet.write(planet_path)
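If you save the script as, say, tiny_planet.rb (the filename is just an example), you can run it as ruby tiny_planet.rb panorama.jpg planet.jpg. Note that the rmagick gem is a binding to ImageMagick, so ImageMagick has to be installed on the system before gem install rmagick will succeed.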