[Math] Relationship between Newton’s method and fixed-point iteration

newton-raphson, numerical-methods

The formula for Newton's method (which solves the zero-finding problem $f(x)=0$) is $$ x_{k+1}=x_{k} - \frac{f(x_k)}{f'(x_k)}$$

I read in my textbook that this can also be seen as a fixed-point iteration, where the zero of the function $f$ is a fixed point of another function $g$. That is, something like this: $$g(x)=x - \frac{f(x)}{f'(x)}$$
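To make the connection concrete, here is a minimal sketch of a generic fixed-point iterator applied to Newton's $g$ for the example $f(x)=x^2-2$ (the helper name `fixed_point_iterate` and the tolerances are my own choices, not from any library):

```python
import math

def fixed_point_iterate(g, x0, tol=1e-12, max_iter=50):
    """Iterate x_{k+1} = g(x_k) until successive values agree within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# For f(x) = x^2 - 2, Newton's g(x) = x - f(x)/f'(x) = x/2 + 1/x.
g = lambda x: x - (x**2 - 2) / (2 * x)

root = fixed_point_iterate(g, 1.5)
print(root)  # converges to sqrt(2) ≈ 1.4142135623730951
```

Note that nothing in `fixed_point_iterate` knows about derivatives; Newton's method is just one particular choice of $g$ fed into a generic fixed-point loop.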

My question is, why is this useful?

For example, if we wanted to prove that Newton's method for $f(x)=x^2-2$ converges on the interval $[1,2]$, how could the fixed-point property be useful? I don't see what the point is, or how it differs from just using Newton's method directly.

Best Answer

A lot is known about fixed-point iterations, and all of it can be applied to the Newton iteration. By "just using Newton's method", you may be able to tell what happens when you start at one particular initial point, but how can you tell whether it will converge for all initial points in a certain interval? Using the theory of fixed-point iterations, this may be possible.
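For instance, the contraction mapping (Banach fixed-point) theorem says the iteration converges from every starting point in $[a,b]$ if $g$ maps $[a,b]$ into itself and $|g'(x)| < 1$ there. For the asker's example $f(x)=x^2-2$, we have $g(x)=x/2+1/x$ and $g'(x)=1/2-1/x^2$, and a quick numerical check (a sketch, sampling the interval rather than a formal proof) confirms both conditions on $[1,2]$:

```python
# For f(x) = x^2 - 2, Newton's g(x) = x/2 + 1/x, so g'(x) = 1/2 - 1/x^2.
g  = lambda x: x / 2 + 1 / x
dg = lambda x: 0.5 - 1 / x**2

xs = [1 + i * 0.001 for i in range(1001)]        # sample points in [1, 2]
max_slope = max(abs(dg(x)) for x in xs)          # sup |g'| over the sample
g_min = min(g(x) for x in xs)
g_max = max(g(x) for x in xs)

print(max_slope)     # 0.5 < 1, so g is a contraction on [1, 2]
print(g_min, g_max)  # range stays inside [1, 2], so g maps [1, 2] into itself
```

Since $|g'| \le 1/2$ and $g([1,2]) \subseteq [\sqrt2, 3/2] \subset [1,2]$, Newton's method converges to $\sqrt2$ from *every* starting point in $[1,2]$ — a conclusion you could not reach by simply running the iteration from a few points.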

For example, here's one of my favourite results. Say you're using Newton's method to solve $f(x) = 0$, and $x=r$ is one solution. What is the largest interval around $r$ such that if you start in that interval, Newton's method always converges to $r$? This interval will be of the form $(a,b)$, where there are just four possibilities:

  1. $a = -\infty, b = +\infty$.
  2. $a = -\infty, b$ is finite, where $f'(b) = 0$ and $\lim_{x \to b-} g(x) = -\infty$.
  3. $a$ is finite, $b = +\infty$, where $f'(a) = 0$ and $\lim_{x \to a+} g(x) = +\infty$.
  4. A two-cycle: $g(a) = b$, $g(b) = a$.
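Case 4 is easy to see concretely: $f(x)=x^3-2x+2$ is a standard textbook example where Newton's iterates fall into a two-cycle (this specific $f$ is my illustration, not from the answer above):

```python
# f(x) = x^3 - 2x + 2: Newton's map has a two-cycle between 0 and 1.
f  = lambda x: x**3 - 2 * x + 2
df = lambda x: 3 * x**2 - 2
g  = lambda x: x - f(x) / df(x)

print(g(0.0))  # 1.0
print(g(1.0))  # 0.0 -- starting at 0, the iterates bounce between 0 and 1 forever
```

Here $g(a)=b$ and $g(b)=a$ with $a=0$, $b=1$, so the endpoints of the basin of the nearby root are exactly such a two-cycle.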