Definition – Why is $\pi$ = 3.14… Instead of 6.28…?


Inspired by a paper from 2001 entitled "Pi is Wrong":

Why is $\pi$ = 3.14… instead of 6.28… ?

Setting $\pi$ = 6.28 would seem to simplify many equations and constants in math and physics.

Is there an intuitive reason we relate the circumference of a circle to its diameter instead of its radius, or was it an arbitrary choice that's left us with multiplicative baggage?

Best Answer

For mathematicians, $2\pi$ is a more natural constant than $\pi$ because it is the circumference of the unit circle. The value $2\pi$ appears throughout mathematics connected with the circle, for instance in Fourier transforms (the complex units of modulus one form a circle of circumference $2\pi$). Thus the symmetric, unitary form of the Fourier transform of a function $f(x)$, in terms of angular frequency $\omega$, is:
$$ \hat{f}(\omega) = \frac{1}{\sqrt{2\pi}}\int f(x)\;e^{-i\omega x}\;dx$$

The subject has been surfacing recently; see, for instance, Science on MSNBC.com, June 29, 2011: "Mathematicians want to say goodbye to pi."
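As a quick numerical sanity check on the $1/\sqrt{2\pi}$ normalization, here is a sketch using the standard Gaussian $e^{-x^2/2}$, which this unitary convention maps to itself (the grid and integration window are arbitrary choices):

```python
import numpy as np

# Under the unitary convention f_hat(w) = (1/sqrt(2*pi)) * integral of
# f(x) * exp(-i*w*x) dx, the Gaussian exp(-x^2/2) is its own transform.
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
f = np.exp(-x**2 / 2)

def f_hat(w):
    # Riemann-sum approximation of the transform integral.
    return np.sum(f * np.exp(-1j * w * x)) * dx / np.sqrt(2 * np.pi)

for w in (0.0, 1.0, 2.0):
    print(w, f_hat(w).real, np.exp(-w**2 / 2))
```

Each printed pair agrees to high precision, confirming that the $\sqrt{2\pi}$ factor (and not, say, $2\pi$ or $1$) makes the transform unitary in this convention.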


The original use of $\pi$ had to do with the relationship between the circular measurement of a circle (its circumference) and the straight-line measurement of it (its radius or diameter). If $\pi = 3.14\ldots$, then it is the diameter that is related to the circumference; if $\pi = 6.28\ldots$, then it is the radius.

Relating the radius to the circumference may be more convenient for modern students, but $\pi$ was defined by carpenters and other artisans. It's easier and more accurate to measure the diameter than the radius. For example, if the object is a hoop, one always measures the diameter first and from this one obtains the radius.

Given a circle (perhaps on paper) one instinctively measures its diameter by maneuvering a ruler to obtain the largest distance between opposite sides. To measure the circle's radius an additional point is required, the center of the circle. This situation is fairly common in construction. For example, if one cuts a tree in two, the diameter is easily measured, whereas the radius can be measured easily only if the tree has grown and been cut symmetrically. Otherwise the center of the circle must be found by construction, and this process introduces measurement error and additional possibilities for mistakes.
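The "widest span" procedure works because the diameter is the largest distance between any two points of a circle, so no knowledge of the center is needed. A minimal sketch (the radius, center, and sample count here are arbitrary illustrative choices):

```python
import itertools
import math

# Sample points on a circle of radius 2.5 centered away from the origin;
# the center plays no role in the measurement below.
r, cx, cy = 2.5, 1.0, -3.0
pts = [(cx + r * math.cos(2 * math.pi * k / 360),
        cy + r * math.sin(2 * math.pi * k / 360)) for k in range(360)]

# The widest span over all pairs of boundary points is the diameter.
widest = max(math.dist(p, q) for p, q in itertools.combinations(pts, 2))
print(widest)  # close to the diameter 2*r = 5.0
```

Finding the radius, by contrast, would require first locating the center, which is exactly the extra construction step described above.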

In short, $\pi$ is defined as: $$\pi = \frac{\textrm{circumference}}{\textrm{diameter}}$$ because of the historical fact that $\pi$ was used for practical construction.
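One can watch this ratio emerge numerically via Archimedes' classical polygon construction, which needs no prior value of $\pi$: start with a regular hexagon inscribed in the circle and repeatedly double the number of sides; the perimeter divided by the diameter approaches $\pi$. A sketch:

```python
import math

# Archimedes' doubling: a hexagon inscribed in a circle of radius 1 has
# side length 1. Doubling the side count maps side length s to
# sqrt(2 - sqrt(4 - s^2)) (half-angle identity for the chord).
def pi_from_polygons(doublings):
    n, s = 6, 1.0
    for _ in range(doublings):
        s = math.sqrt(2 - math.sqrt(4 - s * s))
        n *= 2
    return n * s / 2  # perimeter / diameter

for k in (0, 4, 13):
    print(6 * 2**k, pi_from_polygons(k))
```

With no doublings this gives exactly $3$ (hexagon perimeter $6$ over diameter $2$); four doublings reproduce Archimedes' 96-gon estimate, and thirteen give $\pi$ to several decimal places.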


The oldest surviving example of a calculation for which a modern person would use $\pi$ is in the Rhind Mathematical Papyrus. The papyrus contains various problems; unfortunately, none requires computing the circumference of a circle. There is, however, a problem in which one computes the volume of a cylindrical granary. In that calculation the scribe works with the diameter of the granary (given as 9) rather than its radius (i.e. 4.5). Thus the oldest evidence we have of mathematical calculation confirms that the ancients were more inclined to measure diameters than radii, and consequently $\pi$ was naturally defined by them as the ratio of the circumference to the diameter, rather than the ratio of the circumference to the radius.
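The scribe's rule can be replayed in a few lines: subtract one ninth of the diameter, square the result, and multiply by the height (the height of 10 used here is an illustrative assumption, in line with the usual reading of the granary problem). Comparing the result against the modern formula $V = \pi r^2 h$ shows the rule amounts to using $\pi = 4\,(8/9)^2 = 256/81 \approx 3.1605$:

```python
from fractions import Fraction

# Rhind-style granary rule: V = (d - d/9)^2 * h, with diameter d = 9.
d, h = Fraction(9), Fraction(10)   # h = 10 assumed for illustration
egyptian_volume = (d - d / 9) ** 2 * h
print(egyptian_volume)             # (9 - 1)^2 * 10 = 640

# Dividing by the modern cylinder formula's (d/2)^2 * h factor recovers
# the value of pi the rule implicitly uses.
implied_pi = egyptian_volume / ((d / 2) ** 2 * h)
print(implied_pi, float(implied_pi))
```

Exact rational arithmetic makes the implied constant visible as the fraction $256/81$, about half a percent above the true value of $\pi$.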