The algorithm in question is based on the very short (2 page) paper:
Daniel J. Lehmann, "On Primality Tests", SIAM J. Comput. 11-2 (1982), pp. 374–375,
which is available online.
There is an abuse of language in describing the likelihood that $p$ is prime as $1-2^{-t}$: for that to even make sense, $p$ must be a random variable (it might be natural to specify that $p$ is drawn uniformly from some interval $[1,N]$ or $[N,2N]$, or uniformly after excluding multiples of small primes). Lehmann's paper is guilty of the same imprecision. What Bayes' rule actually gives is a lower bound on the probability that $p$ is prime; I'm thinking it comes out to something like $1 - O(2^{-t} \ln N)$.
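For concreteness, here is one hedged version of that Bayes computation, assuming $p$ is drawn uniformly from $[1,N]$, so that the prior probability of primality is $\pi \approx 1/\ln N$ by the prime number theorem, and that all $t$ rounds of the test pass:
$$\Pr[\,p \text{ prime} \mid \text{pass}\,] \;=\; \frac{\Pr[\text{pass} \mid \text{prime}]\,\pi}{\Pr[\text{pass} \mid \text{prime}]\,\pi + \Pr[\text{pass} \mid \text{composite}]\,(1-\pi)} \;\ge\; \frac{(1-2^{-t})\,\pi}{(1-2^{-t})\,\pi + 2^{-t}} \;=\; 1 - O\!\left(2^{-t}\ln N\right),$$
using $\Pr[\text{pass} \mid \text{prime}] \ge 1-2^{-t}$ and $\Pr[\text{pass} \mid \text{composite}] \le 2^{-t}$.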
While probabilistic primality tests typically yield a certificate of either primality or compositeness (usually, but not always, the latter), Lehmann's method is unusual in that it can falsely label both primes and composites. The probabilities mentioned here are analogous to $p$-values in statistics: they measure the probability, given that a number is prime (resp. composite), of randomly choosing bases $a$ that lead to the wrong conclusion.
This also makes Lehmann's test quite different from Lucas's test, which is a proof of primality and demands testing $a^{(p-1)/q} \bmod p$ for every prime $q$ dividing $p-1$ (that is, all maximal proper divisors of $p-1$ as exponents), not just $(p-1)/2$. One stops, with the certainty that $p$ is prime, upon finding a single $a$ that passes all of these tests.
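For contrast, here is a minimal Python sketch of that Lucas procedure, assuming the caller supplies the prime factors of $p-1$ (the function name and retry bound are illustrative, not from the paper):

```python
import random

def lucas_certify(p, prime_factors_of_p_minus_1, tries=100):
    """Search for a witness a proving p prime: a^(p-1) = 1 (mod p)
    and a^((p-1)/q) != 1 (mod p) for every prime q dividing p-1."""
    for _ in range(tries):
        a = random.randrange(2, p)
        if pow(a, p - 1, p) != 1:
            return False     # Fermat's test fails: p is composite
        if all(pow(a, (p - 1) // q, p) != 1
               for q in prime_factors_of_p_minus_1):
            return True      # a has order p-1, so p is certainly prime
    return None              # inconclusive: no witness found
```

For example, `lucas_certify(101, [2, 5])` succeeds as soon as the loop draws one of the $\varphi(100) = 40$ primitive roots modulo $101$.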
Structurally, Lehmann's test feels closer to other composite-proving tests like Miller-Rabin and Solovay-Strassen. There is a twist necessitated by the existence of numbers like $1729$, which satisfy $a^{(n-1)/2} \equiv 1 \pmod{n}$ for every base $a$ coprime to $n$; hence the added criterion that the value $-1$ must appear at least once.
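To make that structure concrete, here is a short Python sketch of Lehmann's test as I read the paper (the function and parameter names are mine):

```python
import random

def lehmann_is_probably_prime(n, t=20):
    """Lehmann's test for odd n > 3. Either answer can be wrong,
    with probability at most 2**-t in each direction."""
    saw_minus_one = False
    for _ in range(t):
        a = random.randrange(2, n - 1)
        r = pow(a, (n - 1) // 2, n)
        if r not in (1, n - 1):
            return False        # certificate: n is definitely composite
        if r == n - 1:
            saw_minus_one = True
    # If -1 never appeared, declare n composite: for a genuine prime,
    # each round produces -1 with probability 1/2, so missing it in
    # all t rounds happens with probability 2**-t.
    return saw_minus_one
```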
Best Answer
This checks whether $n$ is a multiple of $2$ or $3$ at the start, and then, in the main loop, whether $n$ is a multiple of $5, 7, 11, 13, 17, 19, 23, 25, 29, 31$, etc.: $i$ goes up by $6$ starting at $5$, and each iteration checks both $i$ and $i+2$. So the only question is why every single prime appears in that list. It is because every prime bigger than $3$ must leave remainder $1$ or $5$ when divided by $6$: remainders $0$, $2$, $4$ would make it even, and remainder $3$ would make it a multiple of $3$. (A few composites such as $25$ also get tested, which is harmless.)
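Since the code itself isn't shown in the question, here is a plausible Python reconstruction of the loop being described, including the usual $i \cdot i \le n$ stopping bound:

```python
def is_prime(n):
    """Trial division over the 6k-1 / 6k+1 wheel described above."""
    if n < 2:
        return False
    if n % 2 == 0 or n % 3 == 0:
        return n in (2, 3)     # 2 and 3 are prime; other multiples aren't
    i = 5
    while i * i <= n:          # a composite n has a divisor <= sqrt(n)
        if n % i == 0 or n % (i + 2) == 0:
            return False
        i += 6                 # next candidate pair: 6k - 1 and 6k + 1
    return True
```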