Numerical methods for finding the roots of $f(x)=\left(\cos{\frac{33}{x}\pi}\right) (\cos{x \pi})-1$

Tags: functions, numerical methods, roots, trigonometry

I have a trigonometric function, for instance
$$f(x)=\left(\cos{\frac{33}{x}\pi}\right) (\cos{x \pi})-1$$

I wanted to know the zeroes of this particular function, so I thought I could look into some root-finding algorithms (Newton's, Halley's, the secant method…). However, they don't seem to work here: $f'(x)=0$ at the roots of $f(x)$, so none of those methods is guaranteed to converge.

So I was wondering: is there some type of root-finding algorithm for this particular trigonometric equation? Or, at least, a way to transform the equation into one whose roots cross the x-axis rather than "bouncing" off it, so that Newton's method would apply?

Also, I am focused on roots $>1$ and $<33$.

Note: Although the given example can be solved with trigonometric techniques, I am specifically looking for numerical methods; the example was chosen to make it easy to check the roots. To generalize: for any $$f(x)=\left(\cos{\frac{n}{x}\pi}\right) (\cos{x \pi})-1$$ and an interval $$[a,b]$$ containing exactly one root, is there a numerical method that is guaranteed to converge to that root?

Best Answer

The roots have multiplicity $2$

The situation for the given function is that the roots are at the same time maxima of the function, that is, they have multiplicity $2$, as $$ f(x)=\left(1-2\sin^2\frac{33\pi}{2x}\right)\left(1-2\sin^2\frac{\pi x}{2}\right)-1 $$ so that, after expanding, $-f(x)$ is a sum of squared terms minus their product. Methods that are developed for finding simple roots will either slow down or fail to converge at roots of higher multiplicity; Newton's and Halley's methods slow down.
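Explicitly, writing $a=\sin^2\frac{33\pi}{2x}$ and $b=\sin^2\frac{\pi x}{2}$, the expansion is
$$-f(x)=2a+2b-4ab=2a(1-b)+2b(1-a)\ge 0,$$
since $0\le a,b\le 1$. Hence $f(x)\le 0$ everywhere, and every root of $f$ is also a local maximum.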

There are many local extrema

Another problem with applying Newton's method is that this function has many local maxima and minima at small $x$, due to the first factor. There the derivative is zero, so the Newton step, considered as a function of $x$, has just as many poles. Any improved method based on Newton's method will have as many or more poles, even if the local convergence around the roots of $f$ is better.
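Here the Newton step is viewed as the iteration map
$$N(x)=x-\frac{f(x)}{f'(x)},$$
which has a pole at every point where $f'(x)=0$ but $f(x)\ne0$.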

[Plots of the Newton and Halley iteration maps for $f$, showing the poles at the local extrema and the behavior near the roots $x=3$ and $x=11$.]

Note that at a double root, where locally $f(x)=c(x-r)^2$, the Newton step maps $x$ to $\frac{x+r}2$ and the Halley step to $\frac{x+2r}3$. In the plots, this is somewhat visible around the roots $x=3$ and $x=11$.
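To verify this, take $f(x)=c(x-r)^2$, so that $f'(x)=2c(x-r)$ and $f''(x)=2c$. Then
$$x-\frac{f(x)}{f'(x)}=x-\frac{x-r}{2}=\frac{x+r}{2},\qquad x-\frac{f(x)f'(x)}{f'(x)^2-\frac12 f(x)f''(x)}=x-\frac{2(x-r)}{3}=\frac{x+2r}{3}.$$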

Modifying Newton's method

Knowing this and the possibility of a double root, one can change the Newton step to alternate steps of single and double step size. Then at simple roots the single step will reduce the distance to the root quadratically while the following double step will overshoot the root, however with a smaller step size. At a double root the single step will reduce the distance by half, while the following double step will restore quadratic convergence. In each case, the "wrong" step does not make the situation worse, while the "right" step proceeds with the expected quadratic convergence.
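In the local models this is easy to check. Near a double root, $f(x)\approx c(x-r)^2$, and the double step gives
$$x-2\,\frac{f(x)}{f'(x)}=x-(x-r)=r,$$
so in the quadratic model it lands exactly on the root. Near a simple root, $f(x)\approx c(x-r)$, and the double step reflects $x$ to $2r-x$, leaving the distance to the root unchanged; since the preceding single step has already reduced that distance quadratically, the overshoot is harmless.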

Finding roots inside intervals

If an interval is small enough for a given function, then either it contains no root, or it is contained in the basin of attraction of the root inside it. Finding a subdivision of a given interval that is fine enough is again a heuristic task.

  • If, while performing the iteration, the iterate leaves the given small interval, the search has failed, with a high probability that there is no root inside.
  • Another failure condition is that the iteration enters a cycle. There might be a root inside the span of the cycle, but for simplicity let the iteration fail if, after a small number of iterations, the step size is not small relative to the interval length. With good probability this means that the subdivision is not fine enough.
  • The convergence should now be at least linear, reducing the step size by one half each step. To guard against strange floating point effects, stop based on the iteration count after a number of iterations that theoretically should be sufficient to reach the desired accuracy.
  • Of course, also stop if the desired accuracy is reached.

In Python, this could look like

import numpy as np

def find_roots(method, a, b, segments=10):
    # subdivide [a,b] and start one iteration from each segment midpoint
    seg = np.linspace(a, b, segments+1)
    for k in range(segments):
        ak, bk = seg[k:k+2]
        #print("searching for roots in", [ak, bk])
        x = (ak+bk)/2
        count = 0
        while ak <= x <= bk and count < 50:
            count += 1
            xold, x = x, method(x)
            #print(x)
            # fail early if the step is still large relative to the segment
            if count == 2 and abs(x-xold) > 1e-1*(bk-ak): break
            if abs(x-xold) < 1e-8:
                y, _, _ = f(x)
                print("found root x=%.15f with f(x)=%.8e in %d iterations" % (x, y, count))
                break

Called as find_roots(method,2,12,segments=14), this prints the results

find roots with Newton step
found root x=3.000000007315551 with f(x)=-3.77475828e-15 in 23 iterations
found root x=10.999999991701889 with f(x)=-3.33066907e-16 in 23 iterations
find roots with Halley step
found root x=3.000000004913715 with f(x)=-1.66533454e-15 in 15 iterations
found root x=10.999999999234854 with f(x)=0.00000000e+00 in 16 iterations
find roots with Newton plus double Newton step
found root x=2.999999999980970 with f(x)=0.00000000e+00 in 4 iterations
found root x=10.999999999997232 with f(x)=0.00000000e+00 in 3 iterations

Note that in the last method, each iteration contains two Newton steps. If one counts the effort in function evaluations, then Newton gets a factor of $2$, Halley a factor of $3$, and the double step method a factor of $4$, giving the first two methods a similar complexity.
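For the root at $x=3$ in the runs above, this amounts to $23\cdot2=46$ weighted evaluations for Newton, $15\cdot3=45$ for Halley, and $4\cdot4=16$ for the double-step method.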

Appendix: More code

The method steps are standard implementations

def Newton_f(x):
    # Newton step: x - f/f'
    vf, df, _ = f(x)
    return x - vf/df

def Halley_f(x):
    # Halley step: x - f*f'/(f'^2 - f*f''/2)
    vf, df, ddf = f(x)
    return x - (vf*df)/(df**2 - 0.5*vf*ddf)

def TwoStep_f(x):
    # one ordinary Newton step followed by one double-length Newton step
    vf, df, _ = f(x)
    x = x - vf/df
    vf, df, _ = f(x)
    return x - 2*vf/df

The function implementation also provides the first and second derivatives, à la forward-mode algorithmic differentiation (AD):

def f(x):
    # v1 = 33*pi/x and its first two derivatives
    v1 = 33*np.pi/x; dv1 = -v1/x; ddv1 = -2*dv1/x
    # chain rule for v2 = cos(v1), v3 = sin(v1)
    v2 = np.cos(v1); v3 = np.sin(v1)
    dv2 = -v3*dv1; dv3 = v2*dv1
    ddv2 = -dv3*dv1 - v3*ddv1; ddv3 = dv2*dv1 + v2*ddv1
    # v4 = pi*x and its derivatives
    v4 = np.pi*x; dv4 = np.pi; ddv4 = 0
    # chain rule for v5 = cos(v4), v6 = sin(v4)
    v5 = np.cos(v4); v6 = np.sin(v4)
    dv5 = -v6*dv4; dv6 = v5*dv4
    ddv5 = -dv6*dv4 - v6*ddv4; ddv6 = dv5*dv4 + v5*ddv4

    # product rule for f = v2*v5 - 1 and its derivatives
    return v2*v5 - 1, dv2*v5 + v2*dv5, ddv2*v5 + 2*dv2*dv5 + v2*ddv5
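
As a quick sanity check (not part of the original answer), the hand-coded derivatives can be compared against central finite differences; the helper check_derivatives and the sample points below are hypothetical:

def check_derivatives(x, h=1e-5):
    # compare the AD derivatives of f with central finite differences;
    # h balances truncation error against floating point cancellation
    v, dv, ddv = f(x)
    vp, _, _ = f(x+h)
    vm, _, _ = f(x-h)
    fd1 = (vp - vm)/(2*h)        # O(h^2) approximation of f'(x)
    fd2 = (vp - 2*v + vm)/h**2   # O(h^2) approximation of f''(x)
    print("x=%.3f  f'=%.6e (FD %.6e)  f''=%.6e (FD %.6e)" % (x, dv, fd1, ddv, fd2))

for x in [2.5, 3.7, 10.2]:
    check_derivatives(x)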

The root finder is then called as follows:

names = ["Newton step", "Halley step", "Newton plus double Newton step"]
for name, method in zip(names, [Newton_f, Halley_f, TwoStep_f]):
    print("find roots with %s" % name)
    find_roots(method, 2, 12, segments=14)