Expected runtime analysis for sums of four squares (Rabin and Shallit)

algorithms, computer science, number theory, primality-test, prime numbers

I've been reading a paper of Rabin and Shallit ("Randomized Algorithms in Number Theory"), which gives a brief sketch of an ERH-conditional algorithm to compute a representation of a positive integer $n$ as a sum of four squares, originally presented in some unpublished notes. The claimed expected time complexity of the algorithm is $O(\log^2 n)$. I understand why the algorithm is correct (conditional on ERH), but I'm confused about why the time complexity is as claimed. In particular, the key step is the computation of a prime $p \leq n^5$ satisfying a certain congruence condition mod $4n$. ERH guarantees that if you keep choosing numbers in $[1, n^5]$ satisfying this congruence condition uniformly and independently at random, then you'll come across a prime after an expected $O(\log n)$ trials. But at every trial you still have to test whether the candidate $p$ is prime. (I did consider letting the algorithm run on a composite $p$, hoping it would terminate quickly with a wrong answer or something, but this doesn't seem to be the case; moreover, Rabin–Shallit explicitly write on page S243 to test it for primality.)
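To make the step I'm asking about concrete, here is a rough sketch (my own, not from the paper) of what the sampling loop looks like; the residue `r` below just stands in for whatever congruence condition mod $4n$ the paper imposes, and the primality test is an ordinary Miller–Rabin:

```python
import random

def miller_rabin(p, rounds=20):
    """Standard Miller-Rabin test; each round costs O(log p) modular multiplications."""
    if p < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13):
        if p % small == 0:
            return p == small
    d, s = p - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, p - 1)
        x = pow(a, d, p)
        if x in (1, p - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, p)
            if x == p - 1:
                break
        else:
            return False
    return True

def sample_prime(n, r):
    """Sample candidates p <= n**5 with p ≡ r (mod 4n) until one passes a
    primality test (assumes n >= 2 and 1 <= r <= 4n).  Under ERH the expected
    number of trials is O(log n); the cost of the test inside this loop is
    what my question is about."""
    modulus, bound = 4 * n, n ** 5
    while True:
        k = random.randrange(bound // modulus)
        p = k * modulus + r          # candidate with p ≡ r (mod 4n)
        if 1 <= p <= bound and miller_rabin(p):
            return p
```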

So in order to meet the claimed time complexity, the amortized cost of the primality test would have to be $O(\log n)$ per trial. As far as I know, even allowing randomization, there is no known primality-testing algorithm with this complexity (especially in 1986, when this paper was published) unless you allow some constant error probability. Can anyone point out what I am missing?
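Spelling out the accounting behind this (my own bookkeeping of the argument, with the cost units left informal):

$$
\text{expected cost of the key step} \;=\; \underbrace{O(\log n)}_{\text{expected number of trials}} \;\cdot\; T_{\mathrm{test}}(n),
$$

so for the total to be $O(\log^2 n)$, the cost $T_{\mathrm{test}}(n)$ of a single primality test would have to be $O(\log n)$, which I don't see how to achieve without accepting a constant error probability.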

Best Answer

As discussed in the comments, apparently the number used does not actually need to be prime. We only need to find a square root of $-1$ modulo that number, and the prime density from ERH is used only to show that we have a certain probability of doing so.
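In other words (a sketch of the idea, not the exact procedure in the paper), each candidate $p$ is used directly to try to extract a square root of $-1$, and success is simply verified; a composite candidate just makes the attempt likely to fail, after which one draws a fresh candidate:

```python
import random

def try_sqrt_of_minus_one(p, attempts=3):
    """Try to exhibit x with x^2 ≡ -1 (mod p) without ever testing p for
    primality.  If p ≡ 1 (mod 4) is prime, b**((p-1)//4) works for about
    half of all bases b; if p is composite, the check below almost always
    fails and the caller simply moves on to another candidate p."""
    if p < 5 or p % 4 != 1:
        return None
    for _ in range(attempts):
        b = random.randrange(2, p - 1)
        x = pow(b, (p - 1) // 4, p)
        if (x * x) % p == p - 1:   # certificate: x^2 ≡ -1 (mod p), prime or not
            return x
    return None
```

So the ERH-based density bound is only needed to argue that a random candidate in the progression succeeds often enough for the overall loop to terminate within an expected $O(\log n)$ draws; no $O(\log n)$-time primality test is required.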
