[Math] the difference between the Lehmann Algorithm and Lucas primality test

algorithms, primality-test

According to my lecturer, and as referenced in various textbooks and other S.O. questions, a "Lehmann" algorithm can be used to test for primality. While researching how to implement it in Java, I found the following description from my lecturer:

Lehmann Algorithm: To test whether a number p is a prime number.

  1. Choose a random number a less than p. Calculate r = a^((p−1)/2) mod p.
  2. If r is not 1 or −1, then p is definitely not prime.
  3. If r = 1 or −1, the likelihood that p is not prime is at most 50 percent.

Repeat this algorithm t times. If every calculation yields 1 or −1, but not
every one yields 1, then p is probably prime, with an error probability of
at most 1/2^t (i.e. 1 in 2^t).
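
For concreteness, here is a minimal Java sketch of the procedure as stated above (my own illustration, not code from the lecturer or from Lehmann's paper; the class and method names are made up). It follows the three steps literally, including the requirement that not every round yield 1:

```java
import java.math.BigInteger;
import java.util.Random;

public class LehmannSketch {

    // Lehmann-style test following the three steps above (hypothetical names):
    // pick t random bases 2 <= a < p, compute r = a^((p-1)/2) mod p,
    // report "composite" if any r is outside {1, p-1}, and report
    // "probably prime" only if r = p-1 (i.e. -1 mod p) showed up at least once.
    static boolean isProbablyPrime(BigInteger p, int t, Random rnd) {
        BigInteger two = BigInteger.valueOf(2);
        if (p.compareTo(two) < 0) return false;
        if (p.equals(two)) return true;
        if (!p.testBit(0)) return false;                  // even and greater than 2

        BigInteger minusOne = p.subtract(BigInteger.ONE); // -1 mod p
        BigInteger exp = minusOne.shiftRight(1);          // (p-1)/2
        boolean sawMinusOne = false;

        for (int i = 0; i < t; i++) {
            BigInteger a;
            do {                                          // uniform random a in [2, p-1]
                a = new BigInteger(p.bitLength(), rnd);
            } while (a.compareTo(two) < 0 || a.compareTo(p) >= 0);

            BigInteger r = a.modPow(exp, p);
            if (!r.equals(BigInteger.ONE) && !r.equals(minusOne)) {
                return false;                             // step 2: definitely not prime
            }
            if (r.equals(minusOne)) sawMinusOne = true;
        }
        // If every round gave r = 1, p may still be composite (e.g. a Carmichael
        // number such as 561), so -1 must have appeared at least once.
        return sawMinusOne;
    }

    public static void main(String[] args) {
        Random rnd = new Random();
        System.out.println(isProbablyPrime(BigInteger.valueOf(101), 20, rnd)); // expected: true
        System.out.println(isProbablyPrime(BigInteger.valueOf(561), 20, rnd)); // expected: false
    }
}
```

Trying it on small primes and on Carmichael numbers such as 561 is a reasonable sanity check, since Carmichael numbers are exactly the composites a plain Fermat test misses.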

However, this looks to me like the same algorithm as what is called the "Lucas primality test", which makes no mention of any "Lehman(n)": https://en.wikipedia.org/wiki/Lucas_primality_test

Some more digging turned up this journal article by R. S. Lehman, which defines a related procedure for factoring large integers:
http://www.ams.org/journals/mcom/1974-28-126/S0025-5718-1974-0340163-2/
This suggests that the "Lehman algorithm" actually refers to a different procedure.

I'm not seeing a strong connection here. What is the difference between the two, and who is responsible for the design of the algorithm my lecturer gave us?

Best Answer

The algorithm in question is based on the very short (2 page) paper:

Daniel J. Lehmann, "On Primality Tests", SIAM J. Comput. 11-2 (1982), pp. 374–375,

which is available at this link.

There is an imprecision in describing the likelihood that $p$ is prime as $1-2^{-t}$: for that to even make sense, $p$ has to be a random variable (it would be natural to specify that $p$ is drawn uniformly from some interval $[1,N]$ or $[N,2N]$, or uniformly after excluding multiples of small primes). Lehmann's paper is guilty of the same imprecision. Given such a model, Bayes's rule should yield a lower bound on the probability that $p$ is prime (I'm thinking something like $1 - O(2^{-t} \ln N)$).
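
To make that Bayes step concrete, here is a rough back-of-the-envelope sketch (my own, not from the paper), assuming $p$ is drawn uniformly from an interval around $N$, so that the prior probability of primality is roughly $1/\ln N$, and that a composite survives all $t$ rounds with probability at most $2^{-t}$:

$$\Pr[\text{composite}\mid\text{survives }t\text{ rounds}] \;=\; \frac{\Pr[\text{survives}\mid\text{composite}]\,\Pr[\text{composite}]}{\Pr[\text{survives}]} \;\le\; \frac{2^{-t}\cdot 1}{(1/\ln N)\,(1-2^{-t})} \;=\; O\!\left(2^{-t}\ln N\right),$$

since a prime survives with probability $1-2^{-t}\ge 1/2$. Hence $\Pr[\text{prime}\mid\text{survives}] \ge 1-O(2^{-t}\ln N)$, which is where a figure of that shape comes from.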

While probabilistic primality tests typically yield a certificate of either primality or compositeness (usually, but not always, the latter), Lehmann's method is unusual in that it can falsely label both primes and composites. The probabilities mentioned here are analogous to $p$-values in statistics: they measure the probability, given that a number is prime or composite, of randomly choosing bases $a$ that lead to the wrong conclusion.

This also makes Lehmann's test quite different from Lucas's test, which is a proof of primality and demands testing of all maximal proper divisors of $p-1$, not just $(p-1)/2$. One stops (with the certainty that $p$ is prime) upon finding a single $a$ that passes all of these tests.
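
For contrast, here is a similarly minimal Java sketch of a Lucas-style certificate check (again my own illustration with made-up names; it assumes you already know the complete set of distinct prime factors of $p-1$, which is the expensive part in practice):

```java
import java.math.BigInteger;
import java.util.List;

public class LucasCertificateSketch {

    // Lucas-style certificate check (hypothetical names): given the complete
    // list of distinct prime factors of p-1, the base a proves p prime iff
    //   a^(p-1) = 1 (mod p)   and   a^((p-1)/q) != 1 (mod p) for every q,
    // i.e. the test runs over every maximal proper divisor (p-1)/q of p-1.
    static boolean certifiesPrime(BigInteger p, BigInteger a, List<BigInteger> primeFactorsOfPMinusOne) {
        BigInteger pMinusOne = p.subtract(BigInteger.ONE);
        if (!a.modPow(pMinusOne, p).equals(BigInteger.ONE)) return false;
        for (BigInteger q : primeFactorsOfPMinusOne) {
            if (a.modPow(pMinusOne.divide(q), p).equals(BigInteger.ONE)) return false;
        }
        return true; // a has order p-1, so p is prime
    }

    public static void main(String[] args) {
        // p = 101, p - 1 = 100 = 2^2 * 5^2, distinct prime factors {2, 5}
        BigInteger p = BigInteger.valueOf(101);
        List<BigInteger> factors = List.of(BigInteger.valueOf(2), BigInteger.valueOf(5));
        System.out.println(certifiesPrime(p, BigInteger.valueOf(2), factors));  // true: 2 is a primitive root mod 101
        System.out.println(certifiesPrime(p, BigInteger.valueOf(10), factors)); // false: 10 has order 4, so 10^20 = 1 mod 101
    }
}
```

The contrast with the sketch above: once a suitable base $a$ is found, this is a deterministic proof of primality, not a probabilistic verdict.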

Structurally, Lehmann's test feels closer to other composite-proving tests like Miller-Rabin and Solovay-Strassen. There is a twist necessitated by the existence of Euler pseudoprimes like $1729$, hence the added criterion that $-1$ must appear at least once.
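
To see why that extra criterion matters, here is a small check (my own illustration) of the number $1729 = 7\cdot 13\cdot 19$ mentioned above: for every base $a$ coprime to $1729$ we get $a^{(1729-1)/2} \equiv 1 \pmod{1729}$, because $q-1$ divides $864$ for each prime factor $q$. So the "$r \in \{1,-1\}$" check alone would never expose it; only the demand that $-1$ show up at least once does.

```java
import java.math.BigInteger;

public class EulerPseudoprimeCheck {
    public static void main(String[] args) {
        BigInteger n = BigInteger.valueOf(1729);          // 1729 = 7 * 13 * 19
        BigInteger exp = BigInteger.valueOf(864);         // (n - 1) / 2
        BigInteger minusOne = n.subtract(BigInteger.ONE);
        boolean sawMinusOne = false, sawOther = false;

        for (int a = 2; a < 1729; a++) {
            BigInteger base = BigInteger.valueOf(a);
            if (!base.gcd(n).equals(BigInteger.ONE)) continue;  // only bases coprime to n
            BigInteger r = base.modPow(exp, n);
            if (r.equals(minusOne)) sawMinusOne = true;
            else if (!r.equals(BigInteger.ONE)) sawOther = true;
        }
        // Every coprime base gives r = 1: q - 1 divides 864 for each prime
        // factor q of 1729, so a^864 = 1 (mod q) by Fermat's little theorem.
        System.out.println("saw r = -1: " + sawMinusOne + ", saw r outside {1,-1}: " + sawOther);
        // prints: saw r = -1: false, saw r outside {1,-1}: false
    }
}
```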
