Sequences and Series – Limit of Slowly Increasing Function

Tags: approximation, sequences-and-series

I should be able to answer this myself, but I feel insecure anyway. I want to know whether a function f(n) is bounded as n goes to infinity (and, if it is bounded, what the limit is). Heuristically it appears (and I assume this for generality) that f(n) is monotonically increasing. But the increase of f(n) for n=1,2,3,4,… diminishes, and even if it eventually converges, the convergence is much too slow for a direct estimate of the limit.
So I look at the sequence f(2^n) and find that the increase ($=\Delta(n)$) seems to halve at each step. Here is the table of values.

$ \small \begin{array} {cccc}
2^n & f(2^n)& \Delta(n) =f(2^n)-f(2^n/2) & Q(n)=\Delta(n+1)/\Delta(n) \\
2 & 1.00000000000 & 1.00000000000 & 1.00000000000 \\
4 & 1.20710678119 & 0.207106781187 & 0.207106781187 \\
8 & 1.32489552200 & 0.117788740818 & 0.568734351153 \\
16 & 1.38378655210 & 0.0588910300981 & 0.499971641511 \\
32 & 1.41323838275 & 0.0294518306501 & 0.500107242156 \\
64 & 1.42796608046 & 0.0147276977050 & 0.500060518479 \\
128 & 1.43533039941 & 0.00736431894937 & 0.500031919236 \\
256 & 1.43901267941 & 0.00368228000046 & 0.500016366181 \\
512 & 1.44085384991 & 0.00184117050297 & 0.500008283655 \\
1024 & 1.44177444283 & 0.000920592923335 & 0.500004166833 \\
2048 & 1.44223474122 & 0.000460298385385 & 0.500002089651 \\
4096 & 1.44246489089 & 0.000230149674341 & 0.500001046382 \\
8192 & 1.44257996585 & 0.000115074957672 & 0.500000523580
\end{array} $

As Q(n) tends to 0.5, I think I should conclude that f(n) is bounded, because the sum of all $\Delta(n)$ is then dominated by a convergent geometric series; but I am not sure whether this is a meaningful conclusion here.
And if it were meaningful: what would then be a reasonable estimate for the upper bound?
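For what it's worth, taking the observed $Q(n)\approx\tfrac12$ at face value (a heuristic assumption, not a proof), the tail beyond the last tabulated row is approximately a geometric series, which gives a quick estimate from that row:

$$\lim_{n\to\infty} f(n) \;\approx\; f(2^{13}) + \Delta(13)\left(\tfrac12+\tfrac14+\cdots\right) \;=\; f(2^{13}) + \Delta(13) \;\approx\; 1.44257997 + 0.00011507 \;\approx\; 1.44269504,$$

which already agrees with the heuristic value $1/\log 2$ from the update below to about eight digits.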


[update]: Heuristics suggest that $$ f(n) \approx { {n-1 \over \log(2)} + {1 \over 2} \over n} $$ and then $$\lim_{n \to \infty} f(n) = {1 \over \log(2)} \approx 1.44269504089 $$
[end update]


The function arises from an adaptation of the following problem: can, for some integer exponent n, the sum of the n'th powers of the first m natural numbers equal (m+1)^n? That is, do there exist natural numbers n>2, m>2 such that $S(m,n) = (m+1)^n$, where $S(m,n)=\sum_{k=1}^m k^n$?

I convert this into a formulation using the Bernoulli polynomials for the sums of n'th powers. It is not difficult to show that m must be greater than n.
Using the Bernoulli polynomials one can also meaningfully interpolate m in the sum S(m,n) to fractional values, say x. So let d(x,n) be the difference $d(x,n) = B(x,n)- (x+1)^n$, where B(x,n) is the Bernoulli-polynomial generalization of S(m,n).
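For reference, the interpolation can be written explicitly via Faulhaber's formula (in the convention $B_1=-\tfrac12$; this is also the form used in the `pa[]` code in the answer below):

$$B(x,n) \;=\; \frac{x^{n+1}}{n+1} \;+\; \sum_{k=1}^{n} (-1)^k\, B_k \binom{n}{k}\, \frac{x^{\,n-k+1}}{n-k+1},$$

which agrees with $S(m,n)$ for every integer $x=m$.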

With this, for a given integer n, I solve for x such that d(x,n)=0. Again, in general x>n, but I want to know whether x/n is bounded. So I consider $f(n)={x_n \over n}$, where $x_n$ is the root of $d(x,n)=0$.
Remark: in my earlier notation in the table above I used x where I meant the exponent, which I now more meaningfully denote by n.


Well, while I'm writing this: possibly the current problem can be solved in a much simpler way… (at least I now have a meaningful heuristic; see the update above).

But my deeper curiosity concerns the idea itself: whether, and to what extent, the above idea of "acceleration of convergence" can be made valid for this type of conclusion at all.

Best Answer

This is more of a (numerical) plausibility argument to support Gottfried's $\lim\limits_{n \to \infty} f(n) = \frac1{\log\;2}$ conjecture than an answer.

What struck me was that the ratios $Q(n)$ indicate that the error decreases by a factor of 2 each time the argument is doubled. This reminded me of the decrease in error exhibited by the trapezoidal rule for numerical integration as the number of subintervals is doubled, leading me to think that Richardson extrapolation might be appropriate for numerically approximating the limit, just as Richardson extrapolation is used to improve trapezoidal estimates in Romberg quadrature.

Recall that what Richardson extrapolation essentially does is to fit an interpolating polynomial $p(x)$ to successive values of a sequence $s_i$, along with some auxiliary sequence $x_i$ that tends to 0 as $i\to\infty$ such that $p(x_i)=s_i$, and then take $p(0)$ to be the estimate of the limit of the sequence $s_i$. For this particular case, the error ratio appearing to approach 2 suggests that we take the auxiliary sequence $x_j=2^{-j}$.

The Richardson scheme proceeds as follows: letting $T_j^{(0)}=s_j$, we generate a triangular array of values

$$\begin{matrix}T_0^{(0)}&&&&\\T_1^{(0)}&T_1^{(1)}&&&\\T_2^{(0)}&T_2^{(1)}&T_2^{(2)}&&\\\vdots&\vdots&\vdots&\ddots&\\T_n^{(0)}&T_n^{(1)}&\cdots&\cdots&T_n^{(n)}\end{matrix}$$

whose entries are generated according to the recursive formula

$$T_j^{(n)}=T_{j}^{(n-1)}+\frac{T_{j}^{(n-1)}-T_{j-1}^{(n-1)}}{\frac{x_{j-n}}{x_j}-1}$$

and the $T_n^{(n)}$ (i.e., the "diagonal" elements of the array) form a sequence which (hopefully) converges faster to the limit than the original $s_j$. For this specific example, we have $s_j=f(2^{j+1})$ and $\frac{x_{j-n}}{x_j}=2^n$.
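To make the recursion concrete (my own unrolling of the first step, not part of the original answer): with $\frac{x_{j-1}}{x_j}=2$, the first column is

$$T_j^{(1)} \;=\; T_j^{(0)} + \frac{T_j^{(0)}-T_{j-1}^{(0)}}{2-1} \;=\; 2s_j - s_{j-1},$$

exactly the combination that cancels an error term proportional to $2^{-j}$: if $s_j = L + c\,2^{-j} + O(4^{-j})$, then $2s_j - s_{j-1} = L + O(4^{-j})$.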

The Richardson scheme can be implemented such that only a one-dimensional array is needed, and this is what I will do in the Mathematica code that follows (specializing to the case $\frac{x_{j-n}}{x_j}=2^n$):

(* Richardson extrapolation with x_j = 2^-j; returns the successive
   diagonal estimates T_k^(k) of the limit of seq *)
richardson[seq_?VectorQ] := Module[{n = Length[seq], m, res, ta},
  ta[1] = First[seq]; res = {First[seq]};
  Do[
   ta[k + 1] = seq[[k + 1]]; (* append the newest sequence member *)
   m = 1;
   Do[
    m *= 2; (* m = 2, 4, 8, ... supplies the factor x_(j-n)/x_j = 2^n *)
    ta[j] = ta[j + 1] + (ta[j + 1] - ta[j])/(m - 1);
    , {j, k, 1, -1}];
   res = {res, ta[1]}; (* ta[1] now holds the newest diagonal element *)
   , {k, n - 1}];
  Flatten[res]
  ]
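As a quick sanity check of the implementation (my own toy example, not from the original answer), a sequence whose error is a polynomial in $2^{-j}$ should be extrapolated essentially exactly:

(* s_j = (1 + 2^-j)^2 = 1 + 2*2^-j + 4^-j has limit 1, and its error
   is a polynomial in 2^-j, the model richardson[] assumes *)
Last[richardson[N[Table[(1 + 2^-j)^2, {j, 0, 8}], 30]]]
(* ≈ 1 to working precision *)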

We then generate the (first fourteen members of the) sequence in the OP to 100 significant figures:

(* pa[n, x] = d(x, n) = B(x, n) - (x + 1)^n, with B(x, n) the
   Bernoulli/Faulhaber interpolation of Sum[k^n, {k, 1, x}] *)
pa[n_Integer, x_] := Expand[Sum[(-1)^k*BernoulliB[k]*Binomial[n, k]*
   (x^(n - k + 1)/(n - k + 1)),
   {k, n}] +  x^(n + 1)/(n + 1) - (x + 1)^n]

(* for each j, find the root x of d(x, n) = 0 with n = 2^j - 1,
   and record the normalized root x/2^j *)
vals = Table[x/2^j /. FindRoot[pa[2^j - 1, x] == 0, {x, 2^j - 1, 2^(j + 1) - 2}, 
    WorkingPrecision -> 100], {j, 14}];

Last[vals] (* sample value, j == 14 *)
1.442637503359895400534264282420018712540776326004454785114823591142583865448308469439211266125019043

If we apply richardson[] to vals, we get the following:

Last[richardson[vals]]
1.44269504088896340735992467998804478392151992973760702424401781700078935678750451268434910460102062

% - 1/Log[2]
-1.0138473535051260244153789098914315899303198623936805672011775182924857213751333727278493`*^-27

and the result of the extrapolation matches $\frac1{\log\;2}$ to ~ 27 digits.

Generating more members of the sequence and computing with more digits might give a more accurate value, but instead of doing this, I decided to apply a different extrapolation algorithm, the Shanks transformation, with the reasoning that if two unrelated algorithms give (almost) the same answer for a given sequence, the answer is likely correct.
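For orientation (a standard fact, not spelled out in the original answer): the basic Shanks transform replaces three consecutive terms of a sequence by

$$\hat{s}_n \;=\; \frac{s_{n+1}s_{n-1}-s_n^2}{s_{n+1}+s_{n-1}-2s_n},$$

which is exact for sequences of the form $s_n = L + c\,q^n$; Wynn's $\varepsilon$-algorithm, used below, is a numerically convenient recursive scheme for iterating this transformation.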

I will skip the details of the mathematics behind the Shanks transformation (but see this article) and instead present Mathematica code:

(* Wynn's epsilon algorithm (Shanks transformation); returns the
   successive epsilon-table estimates of the limit of seq *)
wynnEpsilon[seq_?VectorQ] := Module[{n = Length[seq], ep, res, v, w},
  res = {};
  Do[
   ep[k] = seq[[k]]; (* append the newest sequence member *)
   w = 0;
   Do[
    v = w; w = ep[j];
    (* standard epsilon recursion, guarded against division by
       numerically zero differences *)
    ep[j] = 
     v + (If[Abs[ep[j + 1] - w] > 10^-(Precision[w]), ep[j + 1] - w, 
         10^-(Precision[w])])^-1;
    , {j, k - 1, 1, -1}];
   res = {res, ep[If[OddQ[k], 1, 2]]}; (* keep only even-order entries *)
   , {k, n}];
  Flatten[res]
  ]

This is essentially the algorithm behind the hidden built-in Mathematica function SequenceLimit[], but I have chosen to implement the algorithm explicitly for pedagogical reasons.
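As a similar toy check (again my own, not from the original answer): the partial sums of the alternating harmonic series converge to $\log 2$ only like $1/m$, and the $\varepsilon$-algorithm accelerates them dramatically:

(* partial sums of the alternating harmonic series, limit Log[2] *)
partials = Accumulate[Table[(-1)^(k + 1)/k, {k, 1, 15}]];
Last[wynnEpsilon[N[partials, 50]]] - Log[2]
(* residual many orders of magnitude below the ~0.03 error
   of the raw 15-term partial sum *)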

We then have the results

Last[wynnEpsilon[vals]]
1.442695040888963407376933999717472189155804331509810458775727421372220122967729979363328849794520

% - 1/Log[2]
1.70093187155800517291583773568245246402780144411109037865448994778022269010133063583652`*^-20

with a result that matches $\frac1{\log\;2}$ to ~ 20 digits. (The slight loss of accuracy is the price for the generality of the Shanks transformation, which did not exploit the behavior of the error ratios.)

The agreement of these two different extrapolation algorithms lends credibility to the conjecture that the limit is $\frac1{\log\;2}$.
