[Math] Running time of Modular Exponentiation

algorithms, analysis-of-algorithms, elementary-number-theory, modular-arithmetic

I am trying to understand why the modular exponentiation algorithm has a running time of $O(\log n)$, where $n$ is the exponent. Here is the algorithm:

function modular_pow(base, exponent, modulus)
    result := 1
    while exponent > 0
        if exponent mod 2 == 1
            result := (result * base) mod modulus
        exponent := exponent >> 1
        base := (base * base) mod modulus
    return result

I think it has something to do with the number of bits in the binary representation of the exponent. Could someone also give an intuitive numerical example?
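For concreteness, here is a direct Python transcription of the pseudocode above, together with a small numerical example (the function name just mirrors the pseudocode):

```python
def modular_pow(base, exponent, modulus):
    """Right-to-left binary (square-and-multiply) modular exponentiation."""
    result = 1
    while exponent > 0:
        if exponent % 2 == 1:               # low bit is 1: fold base into result
            result = (result * base) % modulus
        exponent >>= 1                      # drop the low bit
        base = (base * base) % modulus      # square for the next bit
    return result

# Numerical example: 5^13 mod 7.  13 = 1101 in binary, so the loop
# runs exactly 4 times -- once per bit of the exponent.
print(modular_pow(5, 13, 7))  # → 5
```

Note the loop halves `exponent` on every pass, so the number of iterations equals the bit length of the exponent, which is where the $\log_2 n$ comes from.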

Best Answer

I can't read that code, but when you want to raise something to the power $n$, you can write $n$ out in binary, and then for every zero in that binary expansion you have to do a squaring, and for every one in the expansion you have to do a squaring and a multiplication. So the number of operations you have to do is at most twice the number of bits in the binary expansion of $n$. But the number of bits in the binary expansion of $n$ is $\lfloor \log_2 n \rfloor + 1$, which is essentially $\log_2 n$.
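To make the counting concrete, here is a sketch of the left-to-right square-and-multiply scheme described above, instrumented to count operations (the function name is just illustrative; it assumes $n \ge 1$):

```python
def pow_count(x, n):
    """Left-to-right square-and-multiply; returns x**n and the operation counts."""
    bits = bin(n)[2:]                 # e.g. n = 13 -> '1101'
    result = x                        # the leading 1-bit gives us x for free
    squarings = multiplies = 0
    for bit in bits[1:]:
        result = result * result      # one squaring per remaining bit
        squarings += 1
        if bit == '1':
            result = result * x       # one extra multiply per 1-bit
            multiplies += 1
    return result, squarings, multiplies

# n = 13 = 1101 in binary (4 bits): 3 squarings + 2 multiplies = 5 operations,
# comfortably under twice the bit length.
print(pow_count(2, 13))  # → (8192, 3, 2)
```

So for a $k$-bit exponent the scheme does $k-1$ squarings and at most $k-1$ extra multiplications, i.e. at most $2(k-1)$ operations, matching the "twice the number of bits" bound in the answer.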
