Efficiency of a root finding algorithm

Tags: computational-mathematics, roots

I'm writing some code involving polynomials, and I have a few questions about a root-finding algorithm I've come up with based on my current knowledge.

Given a polynomial, I would like to find all of its real roots on the closed interval $[a,b]$. Currently, I have a function that counts the number of roots on $[a,b]$ using Sturm's Theorem, and I want to take advantage of this.
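For concreteness, here is a sketch of what a Sturm-based root counter might look like. This is not the asker's code; the coefficient convention (highest degree first), the helper names, and the tolerances are all my own assumptions. It counts distinct real roots in the half-open interval $(a, b]$, which is the standard statement of Sturm's Theorem:

```python
def polyval(c, x):
    """Evaluate a polynomial (coefficients highest-degree first) at x via Horner's rule."""
    r = 0.0
    for coef in c:
        r = r * x + coef
    return r

def polyder(c):
    """Coefficients of the derivative."""
    n = len(c) - 1
    return [coef * (n - i) for i, coef in enumerate(c[:-1])]

def polyrem(num, den):
    """Remainder of polynomial long division num / den."""
    num = list(num)
    while len(num) >= len(den):
        k = num[0] / den[0]
        for i in range(len(den)):
            num[i] -= k * den[i]
        num.pop(0)  # leading coefficient is now (numerically) zero
    return num

def sturm_chain(c):
    """Sturm sequence: p0 = p, p1 = p', p_{i+1} = -rem(p_{i-1}, p_i)."""
    chain = [list(c), polyder(c)]
    while len(chain[-1]) > 1:
        rem = polyrem(chain[-2], chain[-1])
        while len(rem) > 1 and abs(rem[0]) < 1e-12:
            rem.pop(0)  # drop numerically-zero leading coefficients
        if all(abs(a) < 1e-12 for a in rem):
            break  # p has a repeated root; the chain terminates early
        chain.append([-a for a in rem])
    return chain

def sign_changes(chain, x):
    """Number of sign changes in the chain evaluated at x (zeros skipped)."""
    signs = [v for v in (polyval(c, x) for c in chain) if v != 0]
    return sum(1 for u, v in zip(signs, signs[1:]) if u * v < 0)

def count_roots(c, a, b):
    """Number of distinct real roots of c in (a, b], by Sturm's Theorem."""
    chain = sturm_chain(c)
    return sign_changes(chain, a) - sign_changes(chain, b)
```

For example, $x^3 - x$ has roots $-1, 0, 1$, so `count_roots([1, 0, -1, 0], -2, 2)` gives 3.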

My current approach is something like the Bisection Method:

  1. Recursively bisect $[a,b]$ into smaller sub-intervals until each sub-interval has either one or zero roots.
  2. If a sub-interval contains exactly one root, find it with Newton's Method (or any other single-root method). Otherwise, the sub-interval contains no root and is ignored.
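Step 1 above can be sketched as a short recursion. The `count_roots` helper here is only a stand-in for the asker's Sturm-based counter (implemented with `numpy.roots` purely to make the sketch runnable); the function names and tolerance are my own assumptions:

```python
import numpy as np

def count_roots(c, a, b):
    """Stand-in root counter: real roots of c (coeffs high-to-low) in (a, b]."""
    r = np.roots(c)
    real = r[np.abs(r.imag) < 1e-9].real
    return int(np.sum((real > a) & (real <= b)))

def isolate(c, a, b, count=count_roots, tol=1e-9):
    """Recursively bisect [a, b] into sub-intervals holding at most one root each."""
    n = count(c, a, b)
    if n == 0:
        return []  # no root here: discard the sub-interval
    if n == 1 or b - a < tol:
        # one root isolated (or the interval is too small to split further,
        # which can happen with clustered or repeated roots)
        return [(a, b)]
    m = 0.5 * (a + b)
    return isolate(c, a, m, count, tol) + isolate(c, m, b, count, tol)
```

For $x^3 - x$ on $[-2, 2]$ this yields three sub-intervals, each bracketing one of $-1, 0, 1$.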

My questions are: Is this a reasonable approach? Are there any optimizations? And does this method have a name? I would love to have a more formal method for this problem.

Best Answer

What you describe is a reasonable approach.

Your first step is often called root isolation, and your second step is (in a sense) root polishing. You can Google these terms to learn more.

In your second step, the endpoints of your interval bracket a root. It's a good idea to use a polishing algorithm that keeps the root bracketed. The standard Newton-Raphson algorithm does not preserve bracketing, but it's easy to adapt it so that it does. There's a decent implementation (`rtsafe`) in the Numerical Recipes book.
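A minimal sketch of such a bracket-preserving iteration, assuming `f` changes sign on `[lo, hi]` (this is my own simplified variant in the spirit of Numerical Recipes' `rtsafe`, not a copy of it): take a Newton step when it lands inside the current bracket, and fall back to bisection when it does not.

```python
def safe_newton(f, df, lo, hi, tol=1e-12, max_iter=100):
    """Newton's method that falls back to bisection whenever the
    Newton step would leave the bracketing interval [lo, hi]."""
    flo, fhi = f(lo), f(hi)
    if flo == 0.0:
        return lo
    if fhi == 0.0:
        return hi
    if flo * fhi > 0:
        raise ValueError("root is not bracketed by [lo, hi]")
    x = 0.5 * (lo + hi)
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0.0:
            return x
        # shrink the bracket: keep the half containing the sign change
        if flo * fx < 0:
            hi = x
        else:
            lo, flo = x, fx
        dfx = df(x)
        newton = x - fx / dfx if dfx != 0 else float("inf")
        # accept the Newton step only if it stays strictly inside the bracket
        nxt = newton if lo < newton < hi else 0.5 * (lo + hi)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    return x
```

Because every iterate stays inside a shrinking sign-change bracket, the method cannot diverge the way raw Newton can near a flat spot, while still converging quadratically once the Newton steps are accepted.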