[Math] Comparing powers without logarithms

arithmetic, project-euler

Related to this question and this Project Euler problem (Problem 99), I came up with a recursive algorithm for comparing two numbers of the form $x^y$ (with $x>1$ and $y\ge 0$) without explicit use of logarithms.

To compare $x_1^{y_1}$ to $x_2^{y_2}$ (resulting in "$\lt$", "$=$", or "$\gt$"):

  1. If $x_1=x_2$, compare $y_1$ to $y_2$ and return the result.
  2. If $x_1\gt x_2$, compare $x_2^{y_2}$ and $x_1^{y_1}$ and return the opposite result.
  3. (Now $x_1 \lt x_2$.) If $y_1\le y_2$, then return "$\lt$".
  4. (Now $y_1 \gt y_2$.) Compare $x_1^{y_1-y_2}$ to $\left(x_2/x_1\right)^{y_2}$ and return the result.

The last step is justified by noting that
$$
x_1^{y_1} - x_2^{y_2} = x_1^{y_2}\left(x_1^{y_1-y_2}-(x_2/x_1)^{y_2}\right),
$$
so the new comparison has the same result as the old one; and the algorithm converges, in the sense that at each iteration the exponents decrease (towards $0$) and the bases decrease (towards $1$). My question is: what exactly is this algorithm doing, and how many iterations should I expect it to take? It looks a little like Euclid's algorithm for finding the greatest common divisor. Is there a more precise correspondence, and are the iterates meaningful?
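The four steps can be transcribed directly into Python. This is my own sketch, not tested code from the question: it uses exact rational arithmetic (`fractions.Fraction`) because the ratio $x_2/x_1$ in step 4 is generally not an integer, and it adds a small guard for the edge case $y_1=y_2=0$, where both powers equal $1$ and step 3 alone would wrongly report "$\lt$".

```python
from fractions import Fraction

def compare_powers(x1, y1, x2, y2):
    """Compare x1**y1 with x2**y2 (bases > 1, integer exponents >= 0)
    without logarithms; returns -1, 0 or 1 for <, = or >."""
    if y1 == 0 and y2 == 0:
        return 0                      # guard: both powers equal 1
    # Step 1: equal bases -- compare the exponents.
    if x1 == x2:
        return (y1 > y2) - (y1 < y2)
    # Step 2: make x1 the smaller base and flip the answer.
    if x1 > x2:
        return -compare_powers(x2, y2, x1, y1)
    # Step 3: smaller base and no larger exponent means a smaller power.
    if y1 <= y2:
        return -1
    # Step 4: x1**y1 ? x2**y2  is equivalent to  x1**(y1-y2) ? (x2/x1)**y2.
    return compare_powers(x1, y1 - y2, Fraction(x2) / Fraction(x1), y2)
```

No power is ever evaluated; each recursive call only subtracts exponents and divides bases, e.g. `compare_powers(2, 10, 3, 6)` decides $2^{10} \gtrless 3^6$ without computing either number.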

Best Answer

This is a bit long for a comment, so I'll start turning it into an answer, though there may well be more to say, e.g. about the meaning of the iterates.

I find it slightly easier to think about the corresponding algorithm for comparing products without multiplication; as discussed in the comments, this is directly analogous. Let's see how this plays out in an example (assuming that none of the intermediate steps yield a successful comparison):

$$ \begin{eqnarray}
54a&\lessgtr&21b\\
33a&\lessgtr&21(b-a)\\
12a&\lessgtr&21(b-2a)\\
12(3a-b)&\lessgtr&9(b-2a)\\
3(3a-b)&\lessgtr&9(2b-5a)\\
3(8a-3b)&\lessgtr&6(2b-5a)\\
3(13a-5b)&\lessgtr&3(2b-5a)
\end{eqnarray}$$

The algorithm ends with $\gcd(54,21)=3$ and a direct comparison of the numbers $13a-5b$ and $2b-5a$, which is the same as comparing $18a$ and $7b$, the original comparison reduced by the gcd.
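The reduction in the display is easy to mechanize. Here is a sketch (my own transcription, with the function name and signature chosen for illustration) that compares $m\cdot a$ with $n\cdot b$ using only subtractions and sign tests, never a multiplication; each branch preserves the sign of $ma - nb$, and the coefficients $(m, n)$ follow the subtraction-only Euclidean algorithm until they meet at the gcd.

```python
def compare_products(m, a, n, b):
    """Return the sign of m*a - n*b (m, n positive integers) using only
    subtraction and comparison -- the subtraction-only Euclidean
    reduction from the worked example above."""
    # A sign test may settle the comparison at an intermediate step.
    if a <= 0 or b <= 0:
        if a <= 0 < b:
            return -1                 # m*a <= 0 < n*b
        if b <= 0 < a:
            return 1                  # n*b <= 0 < m*a
        if a == 0 and b == 0:
            return 0
        # both strictly negative: m*a - n*b = n*(-b) - m*(-a)
        return compare_products(n, -b, m, -a)
    if m == n:                        # coefficients have hit the gcd
        return (a > b) - (a < b)
    if m > n:
        # m*a - n*b = (m-n)*a - n*(b-a)
        return compare_products(m - n, a, n, b - a)
    # m*a - n*b = m*(a-b) - (n-m)*b
    return compare_products(m, a - b, n - m, b)
```

For instance, `compare_products(54, 7, 21, 18)` replays exactly the seven-line reduction above (with $a=7$, $b=18$): the coefficients pass through $33, 12, \dots$ down to $\gcd(54,21)=3$ before the final direct comparison.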

The efficiency of the Euclidean algorithm is usually analyzed by counting each quotient-and-remainder computation as one step; in that case the worst-case inputs are two consecutive Fibonacci numbers, so the algorithm takes about $\log n/\log\phi$ steps. However, if you perform individual subtractions instead, the running time can be linear in the exponents, the worst case in this scenario being when one of the exponents is $1$ and you're merely very inefficiently multiplying out the power on the other side one factor at a time. At first I thought your algorithm might be an efficient way to organize exponentiation by squaring, but that example shows that it's somewhat complementary to it. Indeed, you could combine the two and make the complexity logarithmic by doing the intermediate steps using repeated squaring.
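Here is a sketch of that combination (my own, under the assumptions of positive rational bases and nonnegative integer exponents): batch a whole quotient's worth of subtraction steps at once via `divmod` on the exponents, and let Python's exact `Fraction` powering, which is itself logarithmic in the exponent, update the base. A batched step can push the reduced base below $1$, so those cases are settled by sign tests.

```python
from fractions import Fraction

def cmp_powers_fast(x1, y1, x2, y2):
    """Sign of x1**y1 - x2**y2 for positive rational bases and integer
    exponents >= 0, in a number of loop iterations logarithmic in the
    exponents: Euclidean quotient steps on the exponents combined with
    fast exponentiation on the bases."""
    x1, x2, sgn = Fraction(x1), Fraction(x2), 1
    while True:
        if x1 == x2:                       # equal bases: compare exponents
            if x1 == 1 or y1 == y2:
                return 0
            c = (y1 > y2) - (y1 < y2)
            return sgn * c if x1 > 1 else -sgn * c
        if y1 == 0 or y2 == 0:             # one side is exactly 1
            if y1 == 0 and y2 == 0:
                return 0
            if y2 == 0:                    # x1**y1 vs 1, with y1 > 0
                return sgn * ((x1 > 1) - (x1 < 1))
            return -sgn * ((x2 > 1) - (x2 < 1))  # 1 vs x2**y2, y2 > 0
        if x1 <= 1 < x2:                   # x1**y1 <= 1 < x2**y2
            return -sgn
        if x2 <= 1 < x1:
            return sgn
        if x1 < 1 and x2 < 1:              # invert: sign(A-B) = sign(1/B-1/A)
            x1, y1, x2, y2 = 1 / x2, y2, 1 / x1, y1
            continue
        if y1 < y2:                        # ensure y1 >= y2; flip the answer
            x1, y1, x2, y2, sgn = x2, y2, x1, y1, -sgn
        # Batched step 4:  x1**y1 ? x2**y2  <=>  x1**r ? (x2/x1**q)**y2
        q, r = divmod(y1, y2)
        x2 = x2 / x1 ** q
        y1 = r
```

Each pass replaces the larger exponent by a remainder, so the exponent pairs trace out exactly the quotient-step Euclidean algorithm, confirming the correspondence described above.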

Two more comments: The bases never influence the control flow, except by ending it; the sequence of computations is entirely determined by the Euclidean algorithm being applied to the exponents. And if you're looking for precision and trying to compare numbers close to each other, I don't think you're gaining much other than reducing by the gcd, since you're explicitly calculating ratios of high powers of the bases, whose quotient is (the gcd-th root of) the quotient of the numbers being compared.