Number Theory – Computing the Product of p/(p – 2) Over Odd Primes

analysis, analytic-number-theory, number-theory, prime-numbers

I'd like to calculate, or find a reasonable estimate for, the Mertens-like product

$$\prod_{2<p\le n}\frac{p}{p-2}=\left(\prod_{2<p\le n}\left(1-\frac{2}{p}\right)\right)^{-1}$$

Also, how does this behave asymptotically?


Hmm… trying to think this one out, I get

$$\left(\prod_{2<p\le n}\left(1-\frac{2}{p}\right)\right)^{-1}=\exp\log\left(\left(\prod_{2<p\le n}\left(1-\frac{2}{p}\right)\right)^{-1}\right)=\exp\left(-\log\prod_{2<p\le n}\left(1-\frac{2}{p}\right)\right)$$
which is
$$\exp\left(-\sum_{2<p\le n}\log\left(1-\frac{2}{p}\right)\right)=\exp\left(\sum_{2<p\le n}\left(\frac{2}{p}+\frac12\left(\frac{2}{p}\right)^2+\frac13\left(\frac{2}{p}\right)^3+\cdots\right)\right)$$
which, with $P(s)$ the prime zeta function and $f(s)=P(s)-2^{-s}$ (the $p=2$ term removed), is less than
$$\exp\left(\frac42f(2)+\frac83f(3)+\cdots+\sum_{2<p\le n}\frac{2}{p}\right)$$
which might not be a bad approximation for large $n$. But I can't immediately find a series for $P(s)$ as $s\to+\infty$, and I'm not sure if there's a better way. Help?
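In the meantime, brute force is easy enough for moderate $n$. A minimal Python sketch (the helper names `primes_upto` and `odd_prime_product` are my own):

```python
from math import prod  # Python 3.8+


def primes_upto(n):
    """Return all primes <= n via a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, is_prime in enumerate(sieve) if is_prime]


def odd_prime_product(n):
    """prod_{2 < p <= n} p / (p - 2), computed directly."""
    return prod(p / (p - 2) for p in primes_upto(n) if p > 2)
```

For instance, `odd_prime_product(10)` multiplies $\frac31\cdot\frac53\cdot\frac75$, which telescopes to $7$.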

Best Answer

Let's do this as explicitly as possible (I would like to find the constant and the error term).

1. The Original Problem

First, consider the identity $\log\left(1-\frac{2}{p}+\frac{1}{p^2}\right) +\log\left(1- \frac{1}{(p-1)^2} \right)=\log\left(1- \frac{2}{p} \right)$, which holds because $\left(1-\frac{1}{p}\right)^2\left(1-\frac{1}{(p-1)^2}\right)=1-\frac{2}{p}$. From this, it follows that

$$\sum_{2<p\leq n}\log\left(1-\frac{2}{p}+\frac{1}{p^2}\right) +\sum_{2<p\leq n} \log\left(1- \frac{1}{(p-1)^2} \right) = \sum_{2<p\leq n}\log\left(1-\frac{2}{p}\right)$$

Multiplying by negative one and exponentiating both sides yields

$$\left(\prod_{2<p\leq n}\left(1-\frac{1}{p}\right)\right)^{-2} \cdot \left(\prod_{2<p\leq n} \left(1- \frac{1}{(p-1)^2} \right)\right)^{-1} = \left(\prod_{2<p\leq n}\left(1-\frac{2}{p}\right)\right)^{-1} $$

Recall that $\Pi_2=\prod_{2<p} \left(1- \frac{1}{(p-1)^2} \right)$ is the twin prime constant, and that the second product on the left-hand side converges to the reciprocal of this as $n\to\infty$. By one of Mertens' formulas we know $$\left(\prod_{2<p\leq n}\left(1-\frac{1}{p}\right)\right)^{-1}=\frac{1}{2}e^\gamma \log n + O(1),$$ where $\gamma$ is the Euler-Mascheroni constant. (Specifically, this is Theorem 2.7(e) in Montgomery and Vaughan, Multiplicative Number Theory I: Classical Theory.) Upon squaring this asymptotic result, we are able to conclude:

$$\left(\prod_{2<p\leq n}\left(1-\frac{2}{p}\right)\right)^{-1} = \frac{1}{4}e^{2\gamma}\Pi_2^{-1} \log^2 n + O(\log n)$$

Hope that helps,

Note: The reason I may substitute the twin prime constant for the finite product is that the partial products converge to $\Pi_2$ very fast compared with the error term. I can give more details if desired, but I'll leave it as an exercise.
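To illustrate that speed of convergence, a small self-contained sketch (`partial_Pi2` is a name I made up); the partial product at $n=10^5$ already agrees with $\Pi_2 \approx 0.6601618$ to roughly six decimal places:

```python
def primes_upto(n):
    """Simple sieve of Eratosthenes, returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, is_prime in enumerate(sieve) if is_prime]


def partial_Pi2(n):
    """Partial product prod_{2 < p <= n} (1 - 1/(p-1)^2)."""
    result = 1.0
    for p in primes_upto(n):
        if p > 2:
            result *= 1.0 - 1.0 / (p - 1) ** 2
    return result
```

Each factor is less than $1$, so the partial products decrease monotonically toward $\Pi_2$, and the tail beyond $n$ contributes only about $\sum_{p>n} (p-1)^{-2} \ll 1/(n\log n)$.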

2. What Is Best Possible?

Can the error term be made better? Yes, it turns out we can make it a lot better. By using the Prime Number Theorem we find $$\prod_{2<p\leq n} \left( 1-\frac{1}{p}\right)^{-1}=\frac{1}{2}e^\gamma \log n + O\left(e^{-c\sqrt{\log n}}\right)$$ where $c$ is the constant appearing in the zero-free region for $\zeta(s)$. Since $e^{-c\sqrt{\log n}}$ decreases faster than any power of $\log n$, we obtain a much better result upon squaring this estimate. Precisely, we have:

$$\left(\prod_{2<p\leq n}\left(1-\frac{2}{p}\right)\right)^{-1} = \frac{1}{4}e^{2\gamma}\Pi_2^{-1} \log^2 n + O\left( e^{-c\sqrt{\log n}} \right)$$

(Again, the convergence to $\Pi_2$ is much too rapid to interfere with the error term.)

I would be willing to bet that this is the best we can do, and that anything better would imply stronger results on the error term for $\pi(x)$, the prime-counting function.

3. Numerics

Just for fun, the constant in front of the $\log^2 n$ term is $\frac{1}{4}e^{2\gamma}\Pi_2^{-1} \approx 1.201303$. How close is the asymptotic? Well, for:

$n=10$ we get an error of $0.630811$

$n=50$ we get an error of $1.22144$

$n=100$ we get an error of $0.63493$

$n=1000$ we get an error of $0.438602$

$n=10^4$ we get an error of $0.250181$

$n=10^5$ we get an error of $0.096783$

$n=10^6$ we get an error of $0.017807$

where each time the error is positive. That is, the product seems to be slightly larger than the asymptotic main term (but converging fairly rapidly). However, my intuition tells me it is almost certain (I will not prove it here) that the error term oscillates between negative and positive values infinitely often.
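Those numbers are easy to reproduce; a self-contained Python sketch (standard decimal values of $\gamma$ and $\Pi_2$ hard-coded, function names mine):

```python
from math import exp, log

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant
PI2 = 0.6601618158468696    # twin prime constant
C = 0.25 * exp(2 * GAMMA) / PI2  # leading coefficient, ~1.201303


def primes_upto(n):
    """Simple sieve of Eratosthenes, returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, is_prime in enumerate(sieve) if is_prime]


def error_term(n):
    """(prod_{2 < p <= n} p/(p-2)) - C * log(n)^2."""
    product = 1.0
    for p in primes_upto(n):
        if p > 2:
            product *= p / (p - 2)
    return product - C * log(n) ** 2


for n in (10, 100, 1000):
    print(n, error_term(n))
```

For example, `error_term(10)` comes out to about $0.6308$ ($= 7 - C\log^2 10$), matching the first row above.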

4. Under the Riemann Hypothesis

If we assume the Riemann Hypothesis, the error term is bounded by $$\frac{C\log^2 x}{\sqrt{x}}$$ for some constant $C$. Fitting the above data numerically, the error actually seems to be best approximated by $\frac{C\log x}{\sqrt{x}}$.
