[Math] Prove that Aitken’s method improves the speed of convergence

convergence-divergence, numerical-methods

About Aitken's $\Delta^2$ Method:

The objective is to find the fixed point of a function. With other methods, we may get a sequence that converges to the fixed point only very slowly. Aitken's method produces, from that sequence, a new sequence that converges to the fixed point faster. If we denote the original sequence by $\{p_n\}$, the new sequence given by Aitken's method is defined as follows:

$\large\hat{p}_n := p_n - \frac{(p_{n+1}-p_n)^2}{p_n-2p_{n+1}+p_{n+2}}$
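To make the definition concrete, here is a minimal Python sketch. The map $f(x)=(x^2+2)/3$ with fixed point $1$ is a made-up test case, chosen so that $q=f'(1)=2/3\in(0,1)$ and the errors $p_n-1$ all keep the same sign:

```python
def aitken(p):
    # hat{p}_n = p_n - (p_{n+1} - p_n)^2 / (p_n - 2 p_{n+1} + p_{n+2})
    return [p[n] - (p[n + 1] - p[n]) ** 2 / (p[n] - 2 * p[n + 1] + p[n + 2])
            for n in range(len(p) - 2)]

# Linearly convergent fixed-point iteration: f(x) = (x^2 + 2)/3, fixed point 1
f = lambda x: (x * x + 2) / 3
p = [0.5]
for _ in range(10):
    p.append(f(p[-1]))

p_hat = aitken(p)
for n in range(5):
    # hat{p}_n is a much better approximation of 1 than p_n
    print(n, abs(p[n] - 1), abs(p_hat[n] - 1))
```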


Question:

Suppose that $\{p_n\}$ is a sequence that converges linearly to $p$ and that for all sufficiently large values of $n$ we have $(p_n-p)(p_{n+1}-p) \gt 0$. Then the sequence $\{ \hat{p}_n \}$ generated by Aitken's $\Delta^2$ method converges to $p$ faster than $\{p_n\}$ in the sense that
$\lim_{n\to\infty} \frac{\hat{p}_n-p}{p_n-p}=0$


My try:

$\large\lim_{n\to\infty} \frac{\hat{p}_n-p}{p_n-p} = \lim_{n\to\infty} \frac{p_n - \frac{(p_{n+1}-p_n)^2}{p_n-2p_{n+1}+p_{n+2}}-p}{p_n-p} =1- \lim_{n\to\infty} \frac{\frac{(p_{n+1}-p_n)^2}{p_n-2p_{n+1}+p_{n+2}}}{p_n-p}$

So, it suffices to show that $\lim_{n\to\infty} \frac{\frac{(p_{n+1}-p_n)^2}{p_n-2p_{n+1}+p_{n+2}}}{p_n-p}=1$

But how can we continue this? It seems like a dead-end…

Any idea?


Note: The question is taken from this link (page 3).

Best Answer

If your fixed point iteration is $x_+=f(x)$, then you need to consider that the sequence produced by the Aitken iteration restarts the slower iteration at each new value. That is, after $\hat p_n$ is computed, the next step starts a fresh sequence $p_n^0=\hat p_n$, $p_n^1=f(p_n^0)$, $p_n^2=f(p_n^1)$ and then computes $$ \hat p_{n+1}=p_n^0-\frac{(p_n^1-p_n^0)^2}{p_n^2-2p_n^1+p_n^0} $$ If you do not take this into account, the sequence $\hat p_n$ is "riding piggy-back" on the sequence $p_n$, with $\hat p_n$ computed from the triple $p_n,p_{n+1},p_{n+2}$. The piggy-back variant still eliminates the leading geometric term of the error, but that term is replaced by another geometric leading term.
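The difference between the two variants can be seen numerically. In this sketch the map $f(x)=(x^2+2)/3$ with fixed point $1$ is just an illustrative test case, not part of the original question:

```python
f = lambda x: (x * x + 2) / 3  # illustrative map, fixed point p = 1

def aitken_step(p0, p1, p2):
    # One Aitken update from a triple of consecutive iterates
    return p0 - (p1 - p0) ** 2 / (p2 - 2 * p1 + p0)

# Piggy-back variant: hat{p}_n is read off the single sequence p_n
p = [0.5]
for _ in range(12):
    p.append(f(p[-1]))
piggyback = [aitken_step(p[n], p[n + 1], p[n + 2]) for n in range(10)]

# Restarted variant: each accelerated value is fed back into the iteration
x = 0.5
restarted = []
for _ in range(4):
    x = aitken_step(x, f(x), f(f(x)))
    restarted.append(x)

# The restarted variant is dramatically more accurate at the same index
print(abs(piggyback[3] - 1), abs(restarted[3] - 1))
```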


Writing $g(x)=f(x)-x$, the Aitken iteration can also be written via the formula of Steffensen's method $$ \hat p_{n+1}=\hat p_n-\frac{g(\hat p_n)^2}{g(\hat p_n+g(\hat p_n))-g(\hat p_n)} $$ where the superlinear convergence follows from its closeness to Newton's method.
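A minimal sketch of Steffensen's form of the iteration; the function $g$ below is a made-up example with root $1$, corresponding to the test map $f(x)=(x^2+2)/3$:

```python
def steffensen(g, x, steps):
    # x_+ = x - g(x)^2 / (g(x + g(x)) - g(x)); with g(x) = f(x) - x this is
    # one restarted Aitken step, since x + g(x) = f(x) and
    # g(x + g(x)) - g(x) = f(f(x)) - 2 f(x) + x.
    for _ in range(steps):
        gx = g(x)
        x = x - gx * gx / (g(x + gx) - gx)
    return x

# Made-up test case: g(x) = f(x) - x = (x^2 - 3x + 2)/3 has the root 1
g = lambda x: (x * x - 3 * x + 2) / 3
print(steffensen(g, 0.5, 4))  # close to 1 after only four steps
```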


It may help to express the iteration in terms of the errors $e_n=p_n-p$, so that in your notation $$ \frac{\hat p_n-p}{p_n-p}=1-\frac{(e_{n+1}-e_n)^2}{e_n(e_{n+2}-2e_{n+1}+e_n)} =\frac{e_ne_{n+2}-e_{n+1}^2}{e_n(e_{n+2}-2e_{n+1}+e_n)} $$ Now if $e_{n+1}=qe_n+o(e_n)$ then $e_{n+2}=q^2e_n+o(e_n)$ and $$ \frac{\hat p_n-p}{p_n-p}=\frac{q^2e_n^2-q^2e_n^2+o(e_n^2)}{e_n(q^2e_n-2qe_n+e_n+o(e_n))}=\frac{o(1)}{(1-q)^2+o(1)} $$ By the definition of the Landau notation this means that $\lim_{n\to\infty}\frac{\hat p_n-p}{p_n-p}=0$: the piggy-backed Aitken sequence converges faster than the original sequence. But how much faster?
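One can check this limit numerically for the piggy-backed variant. The map $f(x)=(x^2+2)/3$ below is an illustrative example with fixed point $p=1$ and $q=f'(1)=2/3$:

```python
f = lambda x: (x * x + 2) / 3  # illustrative map: fixed point p = 1, q = 2/3

p = [0.5]
for _ in range(25):
    p.append(f(p[-1]))

def ratio(n):
    # (hat{p}_n - p) / (p_n - p) for the piggy-backed Aitken value
    hat = p[n] - (p[n + 1] - p[n]) ** 2 / (p[n + 2] - 2 * p[n + 1] + p[n])
    return (hat - 1) / (p[n] - 1)

# The ratio tends to 0, as the argument predicts
print(ratio(0), ratio(10), ratio(20))
```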


If the iteration formula $f$ is twice differentiable, then considering the quadratic Taylor expansion in $p$ gives $$e_{n+1}=f(p+e_n)-p=qe_n+re_n^2+o(e_n^2)$$ Then the next iterate is $$ e_{n+2}=q^2e_n+qre_n^2+rq^2e_n^2+o(e_n^2) $$ and thus the result of the Aitken formula \begin{align} \hat p_n-p &=\frac{q^2e_n^2+r(q+q^2)e_n^3-(q^2e_n^2+2rqe_n^3)+o(e_n^3)}{(q^2e_n+qre_n^2+rq^2e_n^2)-2(qe_n+re_n^2)+e_n+o(e_n^2)}\\[1em] &=\frac{-rq(1-q)e_n^3+o(e_n^3)}{(1-q)^2e_n+r(q+q^2-2)e_n^2+o(e_n^2)} \\[1em] &=-\frac{rq}{(1-q)-(2+q)re_n}e_n^2+o(e_n^2). \end{align} This result gives two insights. First, the convergence of the piggy-backed sequence $\hat p_n$ is still linear, though now with the smaller quotient $q^2$. The second insight is that $\frac{\hat p_n-p}{(p_n-p)^2}\to-\frac{rq}{1-q}$ is bounded, which can be interpreted as quadratic convergence for the continuously restarted Aitken iteration.
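The limit $-\frac{rq}{1-q}$ can also be checked numerically. For the made-up test map $f(x)=(x^2+2)/3$ one has $p=1$, $q=2/3$ and $r=f''(1)/2=1/3$ exactly, so the predicted limit is $-2/3$:

```python
f = lambda x: (x * x + 2) / 3  # test map: p = 1, q = 2/3, r = f''(1)/2 = 1/3
q, r = 2 / 3, 1 / 3
limit = -r * q / (1 - q)       # predicted limit, here exactly -2/3

p = [0.5]
for _ in range(25):
    p.append(f(p[-1]))

n = 18
hat = p[n] - (p[n + 1] - p[n]) ** 2 / (p[n + 2] - 2 * p[n + 1] + p[n])
observed = (hat - 1) / (p[n] - 1) ** 2
print(observed, limit)
```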
