Is there a tighter error bound for this convergent version of Stirling’s series

approximation-theory, gamma-function, sequences-and-series, upper-lower-bounds

I have written an implementation of the log factorial function that employs a convergent version of Stirling's series, namely the formula at the bottom of page 12 of R. Schumacher, "Rapidly Convergent Summation Formulas involving Stirling Series", 2016.

Usually, Stirling's approximation diverges, and this divergence is unsuitable for my purposes of exact sampling algorithms, where series expansions must converge in order for the algorithm to halt almost surely. This is why I implemented Schumacher's convergent version of Stirling's series. In fact, my implementation calculates the log factorial as an interval that bounds the true log factorial from above and below, and this bound gets tighter as more terms are added.

Unfortunately, the partial sums of the formula from the paper are monotonically increasing, and the paper doesn't specify the exact rate of convergence or an upper bound on the error incurred when truncating the series (a bound required for my purposes of exact sampling). So while a partial sum gives a lower bound, I had to employ an ad hoc error bound in my implementation:$$(|s_n| + |s_{n+1}|) \times n,$$where $n$ is the number of terms added, $s_n$ is the last term added, and $s_{n+1}$ is the first neglected term.

This error bound is added to the final interval's upper bound. However, experiments show the resulting upper bound to be rather loose compared to the lower bound, and the gap grows as more terms of the series are added, which suggests to me that the error bound could be improved substantially.
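For concreteness, here is a minimal sketch of this truncation scheme. The helper `stirling_term(x, k)` is a hypothetical stand-in for the $k$-th term of Schumacher's formula (which is not reproduced here), assumed to return an exact rational so the interval arithmetic stays exact:

```python
from fractions import Fraction

def log_factorial_interval(x, n_terms, stirling_term):
    """Sum the first n_terms of a convergent series for log(x!) and
    return an interval (lower, upper) intended to bracket the true
    value.  stirling_term(x, k) is a hypothetical stand-in for the
    k-th term of Schumacher's convergent Stirling series, assumed to
    return an exact Fraction."""
    partial = Fraction(0)
    for k in range(1, n_terms + 1):
        partial += stirling_term(x, k)
    # Ad hoc truncation error bound from above: (|s_n| + |s_{n+1}|) * n,
    # where s_n is the last term added and s_{n+1} the first neglected one.
    err = (abs(stirling_term(x, n_terms))
           + abs(stirling_term(x, n_terms + 1))) * n_terms
    # The partial sums increase monotonically, so the partial sum itself
    # is a lower bound; the error bound widens only the upper end.
    return (partial, partial + err)
```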

Thus I ask: Is there a tighter error bound for this formula than the one I just gave?

Best Answer

Experimentally, the series appears to converge geometrically, or nearly so. This suggests that the Shanks transformation may prove useful for getting a more accurate result, and experimentally this appears to be the case, giving more significant figures of accuracy as $n$ increases. This implies you may use the rough error estimate of

$$\epsilon_n\approx\left|\frac{t_{n+1}^2}{t_{n+2}-t_{n+1}}\right|$$

where $t_n$ is the $n$th term in your series. (For a near-geometric tail with ratio $r \approx t_{n+2}/t_{n+1}$, the neglected remainder is $\sum_{k>n} t_k \approx t_{n+1}/(1-r) = t_{n+1}^2/(t_{n+1}-t_{n+2})$, which is exactly this quantity up to sign.) It also implies that

$$S_n-\frac{t_{n+1}^2}{t_{n+2}-t_{n+1}}$$

is more accurate than your partial sum $S_n=\sum_{k\le n}t_k$.
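As a concrete illustration, here is a minimal Python sketch of this step (the function name and interface are mine, not from the paper); it returns the Shanks-accelerated estimate $S_n - t_{n+1}^2/(t_{n+2}-t_{n+1})$ together with the rough error estimate $\epsilon_n$ above:

```python
def shanks_estimate(terms):
    """Given the terms t_1, ..., t_{n+2} of a series, return the
    Shanks-accelerated estimate S_n - t_{n+1}**2 / (t_{n+2} - t_{n+1})
    of the sum, along with the rough error estimate
    |t_{n+1}**2 / (t_{n+2} - t_{n+1})|, where S_n = t_1 + ... + t_n."""
    if len(terms) < 3:
        raise ValueError("need at least three terms")
    s_n = sum(terms[:-2])              # partial sum S_n
    t_n1, t_n2 = terms[-2], terms[-1]  # t_{n+1} and t_{n+2}
    correction = t_n1 * t_n1 / (t_n2 - t_n1)
    return s_n - correction, abs(correction)

# Sanity check on a purely geometric series, sum_{k>=1} (1/2)^k = 1,
# for which the Shanks transformation is exact:
est, err = shanks_estimate([0.5 ** k for k in range(1, 6)])
print(est, err)  # 1.0 0.125
```

For a purely geometric series the transformation recovers the limit exactly, as the sanity check shows; for the near-geometric Stirling terms it is only a rough estimate, as noted above.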