Proving that the Lagrange error bound of $\ln(1-x)$ tends to $0$ for $|x| \lt 1$

calculus, convergence-divergence, taylor-expansion

It is a well-known fact that:
$$\ln(1-x) = \sum_{k=1}^{\infty} -\frac{x^k}{k}, \qquad \text{for } |x| \lt 1$$
It is easy to show that the series converges whenever $|x| \lt 1$ and diverges for $|x| \gt 1$. But to say that $\ln(1-x)$ actually equals this series, one must also determine for which $x$ the Lagrange error bound tends to $0$ as the number of terms $n \to \infty$:
$$ \lim_{n \to \infty} M_n \cdot \frac{(x-a)^n}{n!} $$
where $M_n$ is the greatest value of $f^{(n+1)}$ between $a$ and $x$.
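As a quick numerical illustration of the series itself (a sketch in Python, assuming nothing beyond the standard library), the partial sums visibly approach $\ln(1-x)$ for $|x| \lt 1$, more slowly as $|x|$ nears $1$:

```python
import math

def partial_sum(x, n):
    """Partial sum of the Taylor series: -sum_{k=1}^{n} x^k / k."""
    return -sum(x**k / k for k in range(1, n + 1))

# Convergence is fast for small |x| and slows down as |x| approaches 1.
for x in (0.5, -0.9, 0.9):
    print(f"x = {x:4}: series(200 terms) = {partial_sum(x, 200):.10f}, "
          f"ln(1-x) = {math.log(1 - x):.10f}")
```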

The condition $|x| \lt 1$ in $\ln(1-x)$ corresponds to $0 \lt x \lt 2$ in $\ln(x)$. So we take the Taylor expansion of $\ln(x)$ around $a=1$ and attempt to prove that the error bound approaches $0$ for $x \in (0, 2)$. It is easy to show that the $(n+1)$th derivative of $\ln(x)$ is, in absolute value:
$$ \frac{n!}{x^{n+1}} $$
If $x \ge 1$, the largest value this takes on the interval $[1, x]$ is at $1$, which gives simply $n!$. Substituting into the Lagrange error bound we get:
$$ \lim_{n \to \infty} n!\cdot\frac{(x-1)^n}{n!} = \lim_{n \to \infty} (x-1)^n $$
This does not tend to $0$ if $x-1 \ge 1$, so $x$ must be less than $2$.
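The derivative formula used above is easy to confirm symbolically; here is a small check (a sketch, assuming sympy is available):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
# Verify d^n/dx^n ln(x) = (-1)^(n-1) (n-1)! / x^n for the first few n,
# so the (n+1)th derivative has absolute value n!/x^(n+1).
for n in range(1, 7):
    derivative = sp.diff(sp.log(x), x, n)
    expected = (-1)**(n - 1) * sp.factorial(n - 1) / x**n
    assert sp.simplify(derivative - expected) == 0
print("Derivative formula verified for n = 1..6")
```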

If $x \lt 1$, then the largest value the $(n+1)$th derivative takes on $[x, 1]$ is at $x$ itself, namely $n!/x^{n+1}$:
$$ \lim_{n \to \infty} \frac{n!}{x^{n+1}}\cdot\frac{(x-1)^n}{n!} =
\lim_{n \to \infty} \frac{(-1)^n}{x}\cdot\frac{(1-x)^n}{x^n} $$

Applying the root test, we now get:
$$\lim_{n \to \infty} \left|\frac{-1}{x^{1/n}}\cdot\frac{1-x}{x} \right| $$
Here $x^{1/n}$ approaches $1$ as $n \to \infty$, and the absolute value signs from the root test remove the $-1$. Hence, for this to converge, it must be the case that:
$$\left|\frac{1-x}{x} \right| \lt 1 \implies x \gt \frac{1}{2}$$
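A quick numeric look confirms this root-test conclusion (a Python sketch, purely illustrative): the bound $\frac{(1-x)^n}{x^{n+1}}$ blows up for $x \lt \frac{1}{2}$ and shrinks for $x \gt \frac{1}{2}$:

```python
# Behaviour of the bound (1-x)^n / x^(n+1) for x on either side of 1/2.
for x in (0.3, 0.7):
    for n in (10, 50, 100):
        bound = (1 - x)**n / x**(n + 1)
        print(f"x = {x}, n = {n:3}: bound = {bound:.3e}")
```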
But then the interval on which the Taylor expansion is valid is $(1/2, 2)$ and
not $(0,2)$. What am I doing wrong here?

Best Answer

It looks like you may have messed up the error term; I am not sure how you got to the conclusion $x \lt 2$.

Define $$e_n(x) = \log(1-x) + \sum_{r=1}^{n-1} \frac{x^r}{r}$$

so that for $|x| \lt 1$, the Lagrange form of the exact error is
$$ e_n(x) = \frac{f^{(n)}(\xi)}{n!}\, x^n, \qquad f(x) = \log(1-x), $$
for some $\xi$ satisfying either $0 \leqslant \xi \lt x$ or $x \lt \xi \leqslant 0$. Now
$$ \frac{d^n}{dx^n} \log(1-x) = -\frac{(n-1)!}{(1-x)^n} $$
so that
$$ e_n(x) = -\frac{x^n}{n} \cdot \frac{1}{(1-\xi)^n}. $$
Now, if $\xi$ is not known, then when $0 \leqslant x \lt 1$,
$$ 0 \leqslant \frac{x}{1-\xi} \lt \frac{x}{1-x}, $$
so for positive $x$ the Lagrange error can only be guaranteed to converge to zero if $0 \leqslant x \lt \frac{1}{2}$. In a similar vein, for negative $x$ we can obtain
$$ \frac{|x|}{1-\xi} \leqslant |x|, $$
and in this case we can be sure that the Lagrange error converges to zero for any $-1 \lt x \leqslant 0$. Taken together, the Lagrange error is guaranteed to have limit zero for $-1 \lt x \lt \frac{1}{2}$.
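To see concretely that the difficulty lies in the bound rather than the series, here is a short numerical sketch (the choice $x = 0.6$ is arbitrary; any $\frac{1}{2} \leqslant x \lt 1$ would do): the actual error $e_n(x)$ shrinks while the worst-case Lagrange bound $\frac{1}{n}\left(\frac{x}{1-x}\right)^n$ grows:

```python
import math

def e_n(x, n):
    """Exact remainder: e_n(x) = log(1-x) + sum_{r=1}^{n-1} x^r / r."""
    return math.log(1 - x) + sum(x**r / r for r in range(1, n))

x = 0.6  # a point with 1/2 <= x < 1
for n in (10, 30, 60):
    worst_case = (x / (1 - x))**n / n  # sup over xi of the Lagrange form
    print(f"n = {n:2}: |e_n(x)| = {abs(e_n(x, n)):.3e}, bound = {worst_case:.3e}")
```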

This does not mean the error does not converge to zero for all $|x| \lt 1$, only that the uncertainty as to the value of $\xi$ means it is not going to be easy to prove so when $x \geqslant \frac{1}{2}$.

There is another error form, which works for functions with a continuous $n$th derivative,

$$e_n(x) = \int_0^x \frac{f^{(n)}(t)}{(n-1)!}(x-t)^{n-1}~dt.$$

This is derived using integration by parts.
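For concreteness, a single step of that derivation looks like this (a sketch, with $f$ assumed $n+1$ times continuously differentiable): integrating by parts with $v = -\frac{(x-t)^n}{n!}$,
$$ \int_0^x \frac{f^{(n)}(t)}{(n-1)!}(x-t)^{n-1} ~ dt = \frac{f^{(n)}(0)}{n!}x^n + \int_0^x \frac{f^{(n+1)}(t)}{n!}(x-t)^n ~ dt, $$
so each step peels off the next Taylor coefficient; starting from $e_1(x) = \int_0^x f'(t)~dt = f(x) - f(0)$ and iterating gives the stated form.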

In our case, with $f(x) = \log(1-x)$, we meet the continuous-derivative requirement by restricting $x \in (-1,1)$ to a closed sub-interval. Then, concentrating only on the harder case $x \geqslant 0$,
$$ e_n(x) = \int_0^x -\frac{1}{(1-t)^n} \cdot (x-t)^{n-1} ~ dt. $$
Substitute $u = \frac{x-t}{1-t}$ to obtain
\begin{align} e_n(x) = -\int_0^x \frac{u^{n-1}}{1-u} ~ du \tag{1}\label{Eq1} \end{align}
which, incidentally, is exactly the same error estimate that would be obtained (with less effort) if we started with $\log(1-x)$ as the integral of the geometric progression and estimated the error from the well-known summation formula. Be that as it may, continuing from \eqref{Eq1},
$$ |e_n(x)| \leqslant \frac{1}{1-x}\int_0^x u^{n-1} ~ du = \frac{x^n}{n(1-x)}, $$
which can now be seen to have limit zero as $n \to \infty$, uniformly on a closed sub-interval of $(0,1)$.
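As a final sanity check of that last bound (again a Python sketch, illustrative only), both the exact error and $\frac{x^n}{n(1-x)}$ go to zero even for $x$ close to $1$:

```python
import math

x = 0.9
for n in (50, 200, 800):
    exact_error = abs(math.log(1 - x) + sum(x**r / r for r in range(1, n)))
    bound = x**n / (n * (1 - x))
    print(f"n = {n:3}: |e_n(x)| = {exact_error:.3e}, x^n/(n(1-x)) = {bound:.3e}")
```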

It is worth noting that much of this could be obtained without resorting to an exact error formula, simply from the ratio test and the uniform convergence of power series on closed sub-intervals of their circle of convergence.
