[Math] Bounding the basins of attraction of Newton’s method

basins-of-attraction, dynamical-systems, newton-raphson, numerical-methods

In general, Newton's method for root finding has a "bubbly" boundary between the basins of convergence of different roots; this is where the familiar Newton fractals come from. Outside these "bubbly" boundaries, though, there are large, clear regions where there is no ambiguity about which root Newton's method will converge to.

Is it possible to build conservative bounds around the basins of convergence, so that if you start inside such a bound you will definitely converge to that root? It might still be possible to converge to the root from outside the bound, if you start in the "bubbly" region, so the bound won't be tight…

But is there a way to use information about a function to set up bounds where you know that you're sufficiently close to your root for Newton's method to converge, and specifically to converge only to your specific root of interest?

Basically, I'm trying to explore numerical methods for finding all the roots of a function in a given interval, including roots of even multiplicity. Since you can't really bracket an even root and use bisection, Newton's method becomes a good choice, but only if the iterations will actually converge.
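To illustrate the bracketing problem: an even-multiplicity root produces no sign change, so there is no interval around it from which bisection can even start. A minimal example (the function is just an illustrative choice):

```python
# f(x) = (x - 1)^2 has a double root at x = 1 but is non-negative everywhere,
# so no bracket [a, b] around the root satisfies f(a) * f(b) < 0, and
# bisection has nothing to work with.
f = lambda x: (x - 1.0) ** 2
a, b = 0.0, 2.0                 # interval that clearly contains the root
print(f(a) * f(b) < 0)          # False: no sign change, so no valid bracket
```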

I have a method in mind ("conservative advancement") for approaching a known even root. It takes safe step sizes based on the maximum error bounds from a Taylor approximation and on how far the function value is from 0 (sketched below). It takes large steps when you're far from the root, but as you approach a non-simple root it converges very slowly (possibly taking millions of iterations). Newton's method converges much more quickly, even though its convergence is only linear at non-simple roots. But I want to make sure it converges only to the root I'm actually trying to reach, rather than skipping the nearest root and converging to another root farther away, since I wouldn't really have a way to detect that I missed a root.
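For concreteness, here is a minimal sketch of one way such a "safe stepping" scheme could look; it is not the asker's exact method. It assumes the caller supplies a Lipschitz-style bound `L` on `|f'|` over the region being traversed, which makes a step of `|f(x)| / L` safe in the sense that it can never step past a root. The example polynomial, helper names, and tolerances are all illustrative assumptions.

```python
def conservative_advance(f, L, x, tol=1e-8, max_iter=10_000_000):
    """March rightward toward the next root of f, never overstepping it.

    L must be a valid upper bound on |f'| over the region traversed:
    if |f'| <= L, then f cannot reach zero within a step of |f(x)| / L.
    """
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x += abs(fx) / L          # safe step: cannot cross a root
    return x                      # very slow near even-multiplicity roots


def newton(f, df, x, tol=1e-8, max_iter=200):
    """Plain Newton iteration; fast, but may jump to a different root."""
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / df(x)
    return x


if __name__ == "__main__":
    f  = lambda x: (x - 1.0) ** 2      # double (even) root at x = 1
    df = lambda x: 2.0 * (x - 1.0)
    L  = 4.0                           # |f'| <= 4 on [-1, 1], the region traversed
    print(conservative_advance(f, L, -1.0))   # tens of thousands of tiny steps
    print(newton(f, df, -1.0))                # roughly fifteen iterations
```

With these inputs the conservative march needs tens of thousands of shrinking steps near the double root, while Newton needs only about fifteen, which is exactly the trade-off described above.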

My guess is that you need to know the roots a priori to be able to bound the basins of convergence (something like a Voronoi region around the roots, probably). But I haven't read any literature about it either way, so I'm curious whether there are any known tricks.

Best Answer

Chapter 6 of Chee Yap's book "Fundamental Problems of Algorithmic Algebra" deals with precisely this problem.

Yap cites Friedman's "On the convergence of Newton's method" and Smale's "The fundamental theorem of algebra and complexity theory" as references for the following result (Yap, Theorem 6.37):

Let $f(X) \in \mathbb{Z}[X]$ be square-free with $m = \deg f$ and $M = 1 + \|f\|_{\infty}$. Then Newton iteration for $f(X)$ is guaranteed to converge to the root $X^*$ provided the initial approximation is within $$\delta_0 = \left[m^{3m+9}(1+M)^{6m}\right]^{-1}$$ of $X^*$.

Here $\|f\|_{\infty}$ denotes the maximum absolute value of the coefficients of $f$.
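To get a feel for the size of the bound as stated, here is a quick evaluation for a small example polynomial of my own choosing (not from Yap): $f(X) = X^2 - 2$, so $m = 2$, $\|f\|_{\infty} = 2$, and $M = 3$.

```python
# Plugging f(X) = X^2 - 2 into the bound above: m = 2, M = 1 + ||f||_inf = 3.
m, M = 2, 3
delta0 = 1.0 / (m ** (3 * m + 9) * (1 + M) ** (6 * m))
print(delta0)   # ~1.8e-12
```

So even for this tiny polynomial the guaranteed radius is on the order of $10^{-12}$; the bound is worst-case and very conservative, which matches the question's expectation that such bounds won't be tight.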

Hope that helps.