[Math] Computing the running time of the Fermat primality test

algorithms, computational-complexity, primality-test

I have a question concerning the Fermat primality test and its running time. According to Wikipedia: "Using fast algorithms for modular exponentiation, the running time of this algorithm is $$O(k \times\log^{2}n\times\log\log{n}\times\log\log\log{n})$$ where $k$ is the number of times we test a random $a$, and $n$ is the value we want to test for primality."

Since I am not familiar with the techniques used to calculate running times, beyond the overall definition, I was wondering whether anyone would be so kind as to explain how the result above is derived.

Best Answer

The factor $k$ should be clear.

To compute $a^{n-1} \bmod n$ one can use repeated squaring and multiplication, which requires one or two modular multiplications per bit of the exponent. As the number of bits of $n$ is about $\log n$, this contributes a factor of $\log n$.
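The answer gives no code, but a minimal sketch of this square-and-multiply idea might look like the following in Python (the name `mod_pow` is just illustrative):

```python
def mod_pow(a, e, n):
    """Left-to-right square-and-multiply: one squaring per bit of e,
    plus one extra multiplication whenever that bit is 1."""
    result = 1
    for bit in bin(e)[2:]:              # bits of e, most significant first
        result = (result * result) % n  # always square
        if bit == "1":
            result = (result * a) % n   # multiply only on a set bit
    return result

# For the Fermat test we compute a^(n-1) mod n in about log2(n) iterations:
print(mod_pow(2, 340, 341))  # 1, even though 341 = 11 * 31 (a Fermat pseudoprime)
```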

A modular multiplication can be done by adding up to $\log n$ bit-shifted copies of one factor. Each addition involves numbers with at most twice as many bits as $n$, so it takes $O(\log n)$ steps; the reduction modulo $n$ can, for example, be done using $\log n$ precomputed values, one for each of the "higher" bits, so it does not increase the $O(\log n)$ cost of the addition.
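As a sketch of this shift-and-add scheme (here the reduction is simply a `%` after each step, which amounts to at most one subtraction of $n$ since the intermediate values stay below $2n$; `mod_mul` is again an illustrative name):

```python
def mod_mul(a, b, n):
    """Schoolbook modular multiplication: add a bit-shifted copy of a
    for every set bit of b, reducing mod n after each step so that
    intermediate values never exceed 2n."""
    result = 0
    a %= n
    while b:
        if b & 1:                      # current bit of b is set:
            result = (result + a) % n  # add the shifted copy, then reduce
        a = (a + a) % n                # shift a left by one bit (mod n)
        b >>= 1
    return result

print(mod_mul(123456, 654321, 1000003))  # same as (123456 * 654321) % 1000003
```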

With this simple method, we thus obtain $$O(\underbrace{k}_{\text{repeats}}\times\underbrace{\log n}_{\text{expon.}}\times\underbrace{\log n}_{\text{mult.}}\times \underbrace{\log n}_{\text{add}})=O(k\times \log^3n).$$ To improve this by a factor of $\frac{\log\log n\times\log\log\log n}{\log n}$, one needs a smarter multiplication algorithm, such as FFT-based Schönhage–Strassen multiplication, which multiplies two $b$-bit numbers in $O(b\log b\log\log b)$ steps; taking $b=\log n$ gives $O(\log n\times\log\log n\times\log\log\log n)$ per modular multiplication, and hence the bound quoted from Wikipedia.
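Putting the pieces together, a minimal sketch of the full test (`fermat_test` is my own name; Python's built-in three-argument `pow` already performs fast modular exponentiation, and CPython uses subquadratic multiplication for large integers):

```python
import random

def fermat_test(n, k):
    """Fermat primality test: k rounds, each costing one modular
    exponentiation a^(n-1) mod n."""
    if n < 4:
        return n in (2, 3)
    for _ in range(k):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:  # Fermat's little theorem fails: composite
            return False
    return True                    # probably prime (or a Carmichael number)

print(fermat_test(104729, 20))  # True: 104729 is prime
print(fermat_test(104730, 20))  # False: 104730 is composite
```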
