Numerical Methods – Newton’s Method for Roots of Polynomials

Tags: multivariable-calculus, newton-raphson, numerical-methods

The standard way to use Newton's method to find a root of a polynomial $p(x)$ is the iteration $$x_{k+1}=x_k-{p(x_k)\over p'(x_k)}$$ I recently thought of a new way of finding the roots. Letting $x_1,\dots,x_n$ denote the $n$ roots and writing $p(x)=x^n+a_{n-1}x^{n-1}+\cdots +a_1x+a_0$, Vieta's formulas give $$x_1+\cdots +x_n=-a_{n-1}$$ $$x_1x_2+x_1x_3+\cdots +x_{n-1}x_n=a_{n-2}$$ (the sum over all pairs of roots) and so on, down to $$x_1x_2\cdots x_n=(-1)^na_0$$ This is a non-linear system of $n$ equations in $n$ unknowns, which can be solved using the multivariable version of Newton's method: iterate $$\mathbf x^{(k+1)}=\mathbf x^{(k)}-[Df(\mathbf x^{(k)})]^{-1}f(\mathbf x^{(k)})$$ where $$f(x_1,\dots,x_n)=\begin{bmatrix}x_1+\cdots +x_n+a_{n-1} \\ x_1x_2+\cdots +x_{n-1}x_n-a_{n-2} \\ \vdots \\ x_1\cdots x_n-(-1)^na_0 \end{bmatrix}$$ and $Df$ is the Jacobian (derivative) matrix of $f$.
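
To make the system concrete, here is a minimal numpy sketch of this multivariable iteration; the function name `vieta_newton`, the stopping rule, and the sample cubic are illustrative choices of mine, not part of the question:

```python
import numpy as np

def vieta_newton(a, x0, tol=1e-12, max_iter=100):
    """Newton's method on the Vieta system for a monic polynomial.

    a  : coefficients [a_{n-1}, ..., a_1, a_0] of
         p(t) = t^n + a_{n-1} t^{n-1} + ... + a_1 t + a_0
    x0 : pairwise distinct initial guesses for all n roots
    """
    x = np.asarray(x0, dtype=complex)
    n = len(x)
    for _ in range(max_iter):
        # f(x): coefficients of prod_i (t - x_i) minus the target
        # coefficients, i.e. the Vieta equations as a zero-finding problem.
        f = np.poly(x)[1:] - a
        # d/dx_j prod_i (t - x_i) = -prod_{i != j} (t - x_i), so column j
        # of the Jacobian holds minus the deflated polynomial's coefficients.
        J = np.empty((n, n), dtype=complex)
        for j in range(n):
            J[:, j] = -np.poly(np.delete(x, j))
        step = np.linalg.solve(J, f)
        x = x - step
        if np.max(np.abs(step)) < tol:
            break
    return x

# Example: p(t) = t^3 - 6t^2 + 11t - 6 = (t-1)(t-2)(t-3), started from
# the classical spiral of initial values (0.4 + 0.9i)^k.
a = np.array([-6.0, 11.0, -6.0])
print(np.sort_complex(vieta_newton(a, (0.4 + 0.9j) ** np.arange(3))))  # ~ [1, 2, 3]
```

Note that the initial guesses must be pairwise distinct, otherwise the Jacobian is singular.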

My question is: is there any advantage to using this method for finding roots? The obvious plus is that the multivariable Newton's method produces all $n$ roots simultaneously, whereas the regular method gives them only one at a time. However, I'm also interested in how the rate of convergence of this method compares with that of the usual one, and in whether one tends to be more numerically stable than the other.

Best Answer

Congratulations, you have reinvented the Durand–Kerner method (Durand, 1960; Kerner, 1966), which had already been developed by Weierstraß in 1891.

It has quite satisfactory convergence (locally quadratic for simple roots), but may show a rather slow initial phase until the root approximations are all close enough to the exact roots. (There used to be Java applets that demonstrated this nicely, especially for symmetric configurations of roots and initial values, but they are now virtually unusable and sometimes no longer even to be found.)
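
For comparison, the Durand–Kerner update needs no Jacobian at all: each approximation $x_i$ is corrected by the Weierstrass factor $p(x_i)/\prod_{j\ne i}(x_i-x_j)$. A minimal numpy sketch under the same monic convention as the question (function name and test polynomial are mine):

```python
import numpy as np

def durand_kerner(a, x0, tol=1e-12, max_iter=200):
    """Durand-Kerner (Weierstrass) iteration for a monic polynomial.

    a  : coefficients [a_{n-1}, ..., a_1, a_0] of
         p(t) = t^n + a_{n-1} t^{n-1} + ... + a_1 t + a_0
    x0 : pairwise distinct initial guesses for all n roots
    """
    coeffs = np.concatenate(([1.0], np.asarray(a, dtype=complex)))
    x = np.asarray(x0, dtype=complex)
    for _ in range(max_iter):
        # Weierstrass correction: w_i = p(x_i) / prod_{j != i} (x_i - x_j)
        w = np.array([np.polyval(coeffs, xi) / np.prod(xi - np.delete(x, i))
                      for i, xi in enumerate(x)])
        x = x - w
        if np.max(np.abs(w)) < tol:
            break
    return x

# Same test cubic (t-1)(t-2)(t-3) and the same spiral starting values:
x0 = (0.4 + 0.9j) ** np.arange(3)
print(np.sort_complex(durand_kerner([-6.0, 11.0, -6.0], x0)))  # ~ [1, 2, 3]
```

(Working out the Newton step on the Vieta system in closed form reproduces exactly this correction, which is the sense in which the question's method is the Durand–Kerner method.)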