Modification of Lagrange’s Remainder Theorem to calculate $\ln 2$

Tags: continuity, derivatives, real-analysis, taylor-expansion

The following question is from Stephen Abbott's "Understanding Analysis."

Question: Explain how Lagrange’s Remainder Theorem can be modified to prove
$$
1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots = \ln 2.
$$

A question close to mine was asked here but does not address my concern.

The usual procedure for such problems is first to show that the power series $f(x) = \sum_{k=1}^\infty \frac{(-1)^{k-1}x^{k}}{k}$ can be obtained from that of $1/(1+x)$ by term-by-term integration, so that $f(x)$ converges on $(-1,1)$, and then to show that the series also converges at $x=1$ (by the alternating series test in this case). Abel's theorem then implies that the series converges uniformly on $[0,1]$, and the continuous limit theorem implies that $f$ is continuous at $x=1$. Finally, since $f(x) = \ln(1+x)$ on $(-1,1)$, we can conclude that $\ln 2 = 1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots$
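As a quick numerical sanity check (an illustration only, not part of Abbott's argument; it uses just the Python standard library), the partial sums do close in on $\ln(1+x)$ across $[0,1]$, with worst-case error within the alternating-series bound $1/(N+1)$:

```python
import math

# Partial sum S_N(x) = sum_{k=1}^{N} (-1)^(k-1) x^k / k of the series
# for log(1+x), evaluated on a grid of [0, 1] to illustrate the
# uniform convergence that Abel's theorem guarantees.
def S(N, x):
    return sum((-1) ** (k - 1) * x ** k / k for k in range(1, N + 1))

xs = [i / 100 for i in range(101)]  # grid on [0, 1]
for N in (10, 100, 1000):
    sup_err = max(abs(math.log(1 + x) - S(N, x)) for x in xs)
    print(N, sup_err)  # stays below 1/(N+1) by the alternating series bound
```

The decreasing sup error over the whole interval, not just at a fixed point, is exactly the uniform convergence that Abel's theorem delivers.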

The question, however, asks us to modify Lagrange’s Remainder Theorem. Below are two statements: the first is Lagrange’s Remainder Theorem as given in the text, and the second is my modification. Please let me know if the second statement is correct, and if not, please give me a hint as to where I am going wrong.

Lagrange’s Remainder Theorem (as given in the text): Let $f$ be differentiable $N + 1$ times on $(−R,R)$. Define $a_n = f^{(n)}(0)/n!$ for $n = 0,1,\cdots,N$, and let
$$
S_N(x) = a_0 +a_1x+a_2x^2 +\cdots+a_Nx^N.
$$

Given $x\neq 0$ in $(−R,R)$, there exists a point $c$ satisfying $|c| < |x|$ where the error function $E_N(x) = f(x) − S_N(x)$ satisfies
$$
E_N(x) = \frac{f^{(N+1)}(c)x^{N+1}}{(N + 1)!}.
$$

Modified Lagrange’s Remainder Theorem: Let $f$ be differentiable $N + 1$ times on $(−R,R)$ with $R>0$, let $f$ be continuous at $x=R$, and let the Taylor series of $f$ converge at $x=R$. Define $a_n = f^{(n)}(0)/n!$ for $n = 0,1,\cdots,N$, and let
$$S_N(x) = a_0 +a_1x+a_2x^2 +\cdots+a_Nx^N.$$
There exists a point $c$ satisfying $|c| < R$ where the error function $E_N(x) = f(x) − S_N(x)$ satisfies
$$
E_N(R) = \frac{f^{(N+1)}(c)R^{N+1}}{(N + 1)!}.
$$

The proof runs along the same lines as the first approach outlined above (which uses Abel's theorem), combined with the standard proof of Lagrange’s Remainder Theorem.

Best Answer

I happen to be working through all the problems in Abbott, preparing to enter a Master's program in mathematics after a long hiatus from the subject. This problem was a real head-scratcher, and in fact I came across your post looking for help understanding it. At least I'm not alone! But, since you hadn't posted a solution, I needed to figure it out for myself. I now think I understand what Abbott was aiming at here.

What modifications are necessary to apply Lagrange’s Remainder Theorem to $x=R$? Recall that the proof of Lagrange’s Remainder Theorem involves a sequence of $x$-values:

$\qquad R > x > x_1 > x_2 > \cdots > x_{N+1} > 0$

where in the original proof, all these values are in $(-R,R)$. If $x=R$, we will therefore need to modify the first step of the proof, but note that by construction we will still have $x_1,\dots,x_{N+1} \in (-R,R)$, so the remainder of the proof will not need to be changed.

Turning to the first step of the proof, we see we need to apply the Generalized Mean Value Theorem to the functions $E_N(x)$ and $x^{N+1}$ on the interval $[0,R]$. In order to do this, we need $E_N$ continuous on $[0,R]$ and differentiable on $(0,R)$.

Now recall $E_N = f - S_N$. But $S_N$ is a polynomial, hence continuous and infinitely differentiable everywhere, so we simply need these properties to hold for $f$. Since $f$ is assumed to be differentiable (and hence continuous) on $(-R,R)$, the only additional assumption needed is that $f$ is continuous at $x = R$.


Applying the above to the problem at hand: We know that

$\qquad f(x) = \log(1+x) = x - x^2/2 + x^3/3 - x^4/4 + \cdots$

is valid for $|x| < 1$. Since $f$ is differentiable at $x=1$, the Lagrange remainder formula holds at $x =1$ as well: that is, $E_N(1) = {f^{(N+1)}(c)\over (N+1)!}1^{N+1}$ for some $c \in (0,1)$. Since $f^{(N+1)}(x) = (-1)^N N!/(1+x)^{N+1}$, we have $|f^{(N+1)}(c)| \le N!$ for $c \in (0,1)$, and therefore $|E_N(1)| \le {1\over N+1} \to 0$.
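The bound can be checked numerically (a small Python illustration, not part of the text): the error of the degree-$N$ Taylor polynomial at $x=1$ indeed stays below the Lagrange bound $1/(N+1)$:

```python
import math

# Degree-N Taylor polynomial of log(1+x) about 0, evaluated at x = 1:
# 1 - 1/2 + 1/3 - ... + (-1)^(N-1)/N.
def taylor_log2(N):
    return sum((-1) ** (k - 1) / k for k in range(1, N + 1))

for N in (5, 50, 500):
    err = abs(math.log(2) - taylor_log2(N))
    print(N, err, 1 / (N + 1))  # err never exceeds the Lagrange bound
```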


What is less clear to me is whether we can draw any general conclusions about the error at $x = R$. Suppose the Taylor series for $f$ is valid on $|x| < R$; in other words, for $|x| < R$ the error $E_N(x) \to 0$. Now suppose $f$ is differentiable at $x = R$, so that $E_N(R)$ has the form given by Lagrange. Do we also have $E_N(R) \to 0$?

Well, no. For example, $f(x) = {1\over 1+x} = 1 - x + x^2 - x^3 + \cdots$ is valid on $|x| < 1$, and $f$ is differentiable at $1$, but the series diverges at $x = 1$.
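A short Python aside (not from the text) makes the failure concrete: at $x=1$ the partial sums of $1 - x + x^2 - x^3 + \cdots$ oscillate between $1$ and $0$, so the error against $f(1) = 1/2$ never decays:

```python
# Partial sums of the geometric series 1 - x + x^2 - ... at x = 1
# alternate between 1 and 0, so E_N(1) = |1/2 - S_N(1)| is always 1/2.
def S(N):
    return sum((-1) ** k for k in range(N + 1))

errors = [abs(0.5 - S(N)) for N in range(10)]
print(errors)  # every entry is 0.5
```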

But we do have $E_N(R) \to 0$ if ${\cal E}_N(x) = {\sup_{(0,R)} |f^{(N+1)}|\over (N+1)!}x^{N+1}$ converges uniformly to $0$ on $(0,R)$, which is often the case. (The supremum is taken over $(0,R)$, where the Lagrange point $c$ lies.) Indeed, in this case, for any $\epsilon > 0$ there exists $N$ such that $n \ge N$ implies $|{\cal E}_n(x)| = {\sup_{(0,R)} |f^{(n+1)}|\over (n+1)!}|x|^{n+1} < \epsilon/2$ for all $x \in (0,R)$, and letting $x \to R^-$ gives $|{\cal E}_n(R)| \le \epsilon/2 < \epsilon$. Now, thanks to our modification to Lagrange's theorem, we have for some $c \in (0,R)$:

$\qquad |E_n(R)| \ = \ \left|{f^{(n+1)}(c) \over(n+1)! }R^{n+1}\right| \ \le \ |{\cal E}_n(R)| \ < \ \epsilon$,

so $E_N(R) \to 0$ and the series is valid for $x = R$ as well.

With $f(x) = \log(1+x)$, we have $f^{(N+1)}(x) = (-1)^N N!/(1+x)^{N+1}$, so $\sup_{(0,1)} |f^{(N+1)}| = N!$ and $|{\cal E}_N(x)| \le {1\over N+1} \to 0$ uniformly. On the other hand, in the counterexample $f(x) = {1\over 1+x}$, we have $\sup_{(0,1)} |f^{(N+1)}| = (N+1)!$, so ${\cal E}_N(x) = x^{N+1}$, which does not converge uniformly to $0$ on $(0,1)$.
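The contrast between the two bounds can also be seen numerically (a sketch in Python; the suprema below are approximated on a finite grid of $(0,1)$, so the second column only approaches its true value of $1$):

```python
# Compare the uniform error bounds on (0, 1):
#   log(1+x):  E_N(x) = x^(N+1) / (N+1)  -- sup at most 1/(N+1), tends to 0
#   1/(1+x):   E_N(x) = x^(N+1)          -- sup over (0,1) is 1, no decay
xs = [i / 1000 for i in range(1, 1000)]  # grid approximating (0, 1)
for N in (5, 50):
    log_sup = max(x ** (N + 1) / (N + 1) for x in xs)
    geo_sup = max(x ** (N + 1) for x in xs)
    print(N, log_sup, geo_sup)
```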
