Real Analysis – Rate of Convergence of Modified Newton’s Method for Multiple Roots

numerical-methods, real-analysis

I've got a problem with a modified Newton's method. We have a function $f \in C^{(k+1)}$ and $r$, which is its multiple root of multiplicity $k$. Also $f^{(k)}(r) \neq 0$ and $f'(x) \neq 0$ for $x \neq r$ in a neighbourhood of $r$.

The modified Newton's method is
$$x_{n+1} = x_n - k\frac{f(x_n)}{f'(x_n)}$$
How can one prove that $x_n$ converges to $r$ quadratically?
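For intuition (not as a proof), here is a quick numerical check I can run; the function $f(x) = (x-1)^3 e^x$, which has a root of multiplicity $k=3$ at $r=1$, and the starting point are just illustrative choices:

```python
import math

# Illustrative example (not part of the problem statement):
# f(x) = (x-1)^3 * exp(x) has a root of multiplicity k = 3 at r = 1.
k, r = 3, 1.0
f  = lambda x: (x - 1)**3 * math.exp(x)
df = lambda x: (3*(x - 1)**2 + (x - 1)**3) * math.exp(x)

x = 1.5  # arbitrary starting point near the root
for n in range(6):
    e = abs(x - r)
    print(f"n={n}  x={x:.15f}  error={e:.3e}")
    x = x - k * f(x) / df(x)   # modified Newton step
```

The printed errors roughly square from one step to the next, which is exactly the behaviour I would like to prove rigorously.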

I found one approach using the function $G(x) = (x-r) f'(x) - kf(x)$, where it is claimed that $G^{(k+1)}(r) \neq 0$; but that claim seems to rely on $f^{(k+1)}(r) \neq 0$, which doesn't necessarily have to be true, am I right? We know that $f^{(k)}(r) \neq 0$, but we have no information about the $(k+1)$-st derivative at $r$. I would appreciate it if somebody explained to me precisely why the method converges quadratically.

Best Answer

If $k=1$, you have a simple root, your iteration reduces to Newton's method and we know that in that case, Newton's method converges quadratically.

If $k > 1$, you have a multiple root: since $f$ has a root of multiplicity $k$ at $r$, it can be written in the form $f(x) = (x-r)^k h(x)$ where $h(r) \neq 0$.

Now your iteration is a fixed-point iteration of the form $x_{n+1} = g(x_n)$ where $g(x) := x - k f(x)/f'(x)$. Using the expression of $f$ above, you can check that $g(r) = r$, which is why $g$ is called a fixed-point function---it does not move $r$---and $r$ is called a fixed point of $g$. The thing with fixed-point iterations is that they converge (to a fixed point of $g$) provided $|g'(r)| < 1$ and you initialize them sufficiently close to $r$.

If you denote by $e_n := x_n - r$ the error at the $n$-th iteration, then, using a Taylor expansion, you get $$ e_{n+1} = x_{n+1} - r = g(x_n) - r \approx g(r) + g'(r) (x_n - r) - r = g'(r) e_n $$ because $g(r) = r$. So essentially, the error is reduced by a factor $|g'(r)|$ at each iteration when you're sufficiently close to $r$. If $g'(r) \neq 0$, this is called linear convergence.

But what if $g'(r) = 0$? You just perform a second-order expansion instead of a first-order one and you get $$ e_{n+1} \approx g(r) + g'(r) e_n + \tfrac{1}{2} g''(r) e_n^2 - r = \tfrac{1}{2} g''(r) e_n^2, $$ and now the error is (roughly) squared at each iteration. This is called quadratic convergence. (If $g''(r)=0$, you go to third order, etc.)
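To make the distinction concrete, here is a small sketch with two toy fixed-point functions chosen only for illustration (they are not taken from your problem), both with fixed point $\sqrt 2$, one with $g'(\sqrt 2) \neq 0$ and one with $g'(\sqrt 2) = 0$:

```python
import math

# Two toy fixed-point iterations, both with fixed point r = sqrt(2):
#   g1(x) = x - (x**2 - 2)/4      -> g1'(r) = 1 - r/2 ≈ 0.29 != 0  (linear)
#   g2(x) = x - (x**2 - 2)/(2*x)  -> g2'(r) = 0                    (quadratic)
r = math.sqrt(2)
g1 = lambda x: x - (x**2 - 2) / 4
g2 = lambda x: x - (x**2 - 2) / (2 * x)

x1 = x2 = 2.0
for n in range(6):
    print(f"n={n}  linear error={abs(x1 - r):.3e}  quadratic error={abs(x2 - r):.3e}")
    x1, x2 = g1(x1), g2(x2)
```

In the first column the error shrinks by roughly the constant factor $|g_1'(\sqrt 2)| \approx 0.29$ per step; in the second column it is roughly squared at every step.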

Why am I explaining all this? If you want to show that your modified Newton iteration converges quadratically, you can try to show that $g(r)=r$ and $g'(r) = 0$.

That is exactly the case with your iteration, and it is relatively easy to check if you write $f(x) = (x-r)^k h(x)$. Compute $f'(x)$, $f''(x)$, $g'(x)$, simplify, and then plug in $x=r$. You'll see that $g'(r)=0$.
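In case it helps, here is a sketch of a slightly more direct route to the same conclusion, working with $g(x) - r$ instead of expanding $g'$ through $f''$. With $f(x) = (x-r)^k h(x)$,
$$f'(x) = (x-r)^{k-1}\bigl(k\,h(x) + (x-r)\,h'(x)\bigr),$$
so
$$g(x) - r = (x-r) - \frac{k\,(x-r)\,h(x)}{k\,h(x) + (x-r)\,h'(x)} = \frac{(x-r)^2\,h'(x)}{k\,h(x) + (x-r)\,h'(x)}.$$
Hence $g(r) = r$, and dividing by $(x-r)$ and letting $x \to r$ gives $g'(r) = 0$. In terms of the error,
$$e_{n+1} = \frac{h'(x_n)}{k\,h(x_n) + e_n\,h'(x_n)}\, e_n^2 \approx \frac{h'(r)}{k\,h(r)}\, e_n^2,$$
which is the quadratic convergence. Note that this only uses the fact that $h$ is $C^1$ near $r$ (which follows from $f \in C^{(k+1)}$ via Taylor's theorem with integral remainder), not $f^{(k+1)}(r) \neq 0$, so the concern raised in the question does not arise here.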