Math History – Indian Claims Finding New Cube Root Formula

arithmetic, math-history


It has eluded experts for centuries, but now an Indian, following in the footsteps of Aryabhatt, one of the earliest Indian mathematicians, claims to have worked out a simple formula to find any number's cube root.

Nirbhay Singh Nahar, a retired chemical engineer and an amateur mathematician, claims he has found a formula that will help students and applied engineers to work out the cube roots of any number in a short time.

"Give me any number – even, odd, decimals, a fraction…and I will give you the cube root using a simple calculator to just add and subtract within a minute and a half. We do have methods and patterns, but no formula at the moment. Even the tables give cube roots of 1 to 1,000, not of fractions or of numbers beyond 1,000, for which people have to use scientific calculators," Nahar, who retired as an engineer from Hindustan Salts Ltd at Sambhar (Rajasthan), said.

Is there any sense to this claim? Is it possible to have an algorithm that gives the cube root which uses only additions and subtractions?

Best Answer

If by "formula", Nirbhay (the former engineer featured in the article) means "closed-form formula", then the answer is obviously no: the answers would all have to be rational, whereas e.g. the cube root of 2 is not. However, if by "formula" Nirbhay means "an algorithm which computes the digits in sequence, converges on the answer, and terminates in the case of integers and other terminating fractions which are perfect cubes", then the answer is yes, if you allow either simple multiplications or work in base 2. (Of course, you can reduce multiplication to addition if you like, but beyond multiplying by small integers or shifting numbers by place values, that feels like cheating.)

No innovation is required; it's a simple modification of the digit-by-digit algorithm for computing square roots. The intuition behind this algorithm is that at each point, you are trying to get better and better approximations to $\sqrt[3]{X}$ by constructing rational numbers $Y_1, Y_2, Y_3, \ldots$ having more and more digits, whose cubes approximate X from below. We chip away at the error between X and the cubes of these approximations by trying to "complete the cube". Each new digit position allows us to chip away more of the difference between $Y_j^3$ and X, by defining $Y_{j+1} = Y_j + \delta$ for a suitable increment δ; we then determine the error involved in the new approximation by computing $Y_{j+1}^3 = Y_j^3 + 3\delta Y_j^2 + 3\delta^2 Y_j + \delta^3$. The trick is how to do this with simple arithmetic operations; but it's not too difficult.
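As a minimal sketch of this chipping-away process (the function name and the use of Python's `fractions` module are my own illustration, not part of the answer), one can greedily test each binary digit with exact rational arithmetic:

```python
from fractions import Fraction

def cube_root_from_below(X, digits):
    """Greedy digit-by-digit approximation of the cube root of X (X < 8):
    at each binary place j we try delta = 2**-j and keep it only if the
    cube of the enlarged approximation still does not exceed X."""
    Y = Fraction(0)
    for j in range(digits + 1):
        delta = Fraction(1, 2 ** j)
        # (Y + delta)**3 = Y**3 + 3*delta*Y**2 + 3*delta**2*Y + delta**3
        if (Y + delta) ** 3 <= X:
            Y += delta
    return Y
```

For X = 27/8 this lands on the exact root 3/2 and stops adding anything further; for X = 2 it produces the truncation of ∛2 to the requested number of binary digits.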

I demonstrate the algorithm in binary, because that's the base where it would be easiest in practice to compute the cube roots. The generalization to arbitrary bases (e.g. decimal) is an exercise for the reader. To simplify the presentation, we may assume without loss of generality that the number X whose cube root we compute is less than 8. We may repeatedly divide by 8 until this is the case; when we have obtained the cube root, simply multiply the answer the same number of times by 2. This is just observing that $\sqrt[3]{8^{-n} \cdot X\;} = 2^{-n} \cdot \sqrt[3]{X\;}$, and allows me to emphasize that the algorithm works just as well for fractions as for integers. Because of how the algorithm works, it looks somewhat as though it's starting with a number in octal and obtaining a root in binary (for square roots, the analogous algorithm produces a binary root from a number in base 4); but of course we can represent the base-8 "digits" by groups of three binary digits.
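The scaling step can be sketched as follows (a toy helper of my own, using exact rationals so that fractional inputs work too):

```python
from fractions import Fraction

def scale_into_range(X):
    """Divide X by 8 until it is < 8; returns (scaled X, n), so that the
    cube root of X is 2**n times the cube root of the scaled value."""
    n = 0
    while X >= 8:
        X = X / 8
        n += 1
    return X, n
```

For example, 1728 scales down to 27/8 after three divisions, and indeed ∛1728 = 2³ · ∛(27/8) = 8 · 3/2 = 12.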

Let $X = x_0 + x_1/8 + x_2/8^2 + \cdots$, where each $x_j$ ranges from 0 to 7. We compute binary digits $y_j$, representing $Y = y_0 + y_1/2 + y_2/2^2 + \cdots$, such that $Y^3 = X$, as follows.

  • If x0 = 0, set y0 = 0; otherwise, set y0 = 1. Let R := x0 − y0.

  • Set A = B = C = y0 .

  • Set j = 1, and repeat the following until the desired number of digits is obtained (or until R = 0 and all the remaining digits xj , xj+1 , ... are also zero, in which case the root is exact):

    1. Set D = 8R + xj , and consider the largest binary digit yj such that $$ D \;\;\geqslant\;\; y_j^3 + 6Ay_j^2 + 12By_j.$$ (The right-hand side is exactly the amount by which C will grow in step 2 if we commit to this digit.) Of course, we're working with binary digits; all the powers of yj are either zero or one. So what we're really testing is whether or not $$ D \;\;\geqslant\;\; 1 + 2A + 4A + 4B + 8B,$$ where multiplication by 2, 4, and 8 can be realized by shifting integers one, two, or three places (tacking on zeros). If the inequality above holds, set yj = 1; otherwise set yj = 0.

    2. The integer A will actually be the number whose binary representation is the bit-sequence y0y1 ... yj−1 in order; B is equal to A², and C is equal to A³. To maintain this invariant for the next iteration, we compute new values A', B', and C' as updated values. For A', we define $$ A' = 2A + y_j \;. $$ For B', we take advantage of the fact that we have B and A: $$\begin{align*} B' &= (A')^2 = (2A + y_j)^2 = 4A^2 + 4Ay_j + y_j^2 \\&= 4B + 4Ay_j + y_j^2 \;. \end{align*}$$ Similarly, for C': $$\begin{align*} C' &= (A')^3 = (2A + y_j)^3 = 8A^3 + 12A^2y_j + 6Ay_j^2 + y_j^3 \\&= 8C + 8By_j + 4By_j + 4Ay_j^2 + 2Ay_j^2 + y_j^3 \;. \end{align*}$$ Again, multiplications by zero, one, or powers of two are trivial in binary representation. We then update A := A', B := B', and C := C'.

    3. We update R, which is meant to represent the error of C as an approximation to the integer formed by the first j digits (in octal) of X: we set $$ R' = \bigl(8R + x_j\bigr) - \bigl(12By_j + 6Ay_j^2 + y_j^3\bigr) = D + 8C - C' \;,$$ where A, B, and C denote the values from before the update in step 2 (when yj = 0, this is simply R' = D); we then set R := R'.

  • If we ever obtain R = 0 with all of the remaining digits of X equal to zero, we stop, having found the cube root exactly.
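Putting the steps above together, here is one way the whole procedure might look in code (my own rendering, not the answerer's; inside the loop, only additions, subtractions, and shifts are used):

```python
def cube_root_bits(octal_digits, n_bits):
    """Digit-by-digit cube root.  Input: digits x_0, x_1, ... of
    X = x_0 + x_1/8 + x_2/8**2 + ... (so X < 8).  Output: binary digits
    y_0, y_1, ... of Y with Y**3 <= X at every step.  Invariants after
    iteration j: A has binary digits y_0..y_j, B = A**2, C = A**3, and
    R is the error of C against the integer with octal digits x_0..x_j."""
    xs = list(octal_digits) + [0] * n_bits
    y0 = 0 if xs[0] == 0 else 1
    A = B = C = y0
    R = xs[0] - y0
    bits = [y0]
    for j in range(1, n_bits + 1):
        D = (R << 3) + xs[j]            # D = 8R + x_j
        # growth of C if y_j = 1: 12B + 6A + 1, built purely from shifts
        inc = (B << 3) + (B << 2) + (A << 2) + (A << 1) + 1
        if D >= inc:                    # the digit test from step 1
            bits.append(1)
            # A' = 2A + 1, B' = 4B + 4A + 1, C' = 8C + inc (step 2)
            A, B, C = (A << 1) + 1, (B << 2) + (A << 2) + 1, (C << 3) + inc
            R = D - inc                 # step 3
        else:
            bits.append(0)
            A, B, C = A << 1, B << 2, C << 3
            R = D
    return bits
```

For X = 27/8 (octal digits 3, 3) the remainder hits zero after producing 1.1₂ = 3/2; for X = 2 the first eight bits come out as 1.0100001₂, the truncation of ∛2.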

In fact, it is not very difficult to modify this algorithm to obtain a procedure for extracting fourth roots, fifth roots, or nth roots for any constant n (though the number of parameters you have to carry around with you, and the sizes of the additions you have to carry out, will grow as you work harder and harder to avoid explicitly performing any multiplications).
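As a sketch of that generalization (again my own illustration; it uses ordinary multiplication via `**` for clarity, rather than carrying around the extra shift-and-add parameters alluded to above): a digit-by-digit nth root reads the input in base 2ⁿ and emits the root in binary.

```python
def nth_root_bits(digits_base_2n, n, n_bits):
    """Digit-by-digit nth root.  Input: digits of X in base 2**n
    (with X < 2**n); output: binary digits of Y with Y**n <= X.
    P accumulates the integer formed by the digits read so far,
    and A the bits of the root found so far."""
    xs = list(digits_base_2n) + [0] * n_bits
    A = 0
    P = 0
    bits = []
    for j in range(n_bits + 1):
        P = (P << n) + xs[j]
        # append a 1 bit exactly when the enlarged root still fits
        y = 1 if (2 * A + 1) ** n <= P else 0
        A = 2 * A + y
        bits.append(y)
    return bits
```

With n = 3 this reproduces the cube-root procedure above; with n = 4, for instance, the fourth root of 81/16 (base-16 digits 5, 1) comes out as 1.1₂ = 3/2.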
