[Math] Gaps between roots of trigonometric polynomials

Tags: na.numerical-analysis, polynomials

[Cross-posted from Math.SE because I got no responses there.]

Given a polynomial in $e^{\mathrm{i}k t}$ of the form
$$ p(t) = \sum_{-n\leq k\leq n} c_k e^{\mathrm{i}k t} $$
with $\bar c_{-k} = c_k$, is there a good way of characterising how close its roots can be in terms of its coefficients?

Clearly this depends on the coefficients: for example, $1-\epsilon-\cos x$ has roots $O(\sqrt{\epsilon})$ apart near the origin. But for general coefficients, is there any good result?
I am secretly hoping for minimum distance of something like $A n^{-1}$ (which is the case for $\sin n x$) for non-almost-degenerate coefficients.
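The example above is easy to check numerically: the two roots of $1-\epsilon-\cos x$ straddling the origin are $\pm\arccos(1-\epsilon)$, so their gap is $2\arccos(1-\epsilon)\approx 2\sqrt{2\epsilon}$. A quick sketch:

```python
import math

# The roots of 1 - eps - cos(x) nearest the origin are x = +-arccos(1 - eps),
# so the gap is 2*arccos(1 - eps) ~ 2*sqrt(2*eps) as eps -> 0.
for eps in (1e-2, 1e-4, 1e-6):
    gap = 2 * math.acos(1 - eps)
    # ratio gap / sqrt(eps) tends to 2*sqrt(2) ~ 2.828
    print(eps, gap, gap / math.sqrt(eps))
```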

If you know a result for any kind of orthogonal polynomials (like Chebyshev polynomials), that would also be very helpful. Also, I know there are some results for asymptotic distributions of eigenvalues of Toeplitz matrices, but I am interested in a result for a fixed polynomial $p$ and its coefficients, and not just $n\to\infty$.

Edit. I am mainly interested in an algorithm that, given a polynomial as above, computes the shortest gap between two adjacent roots of the polynomial.

The motivation is as follows. The simplest method for calculating a root of a function numerically is the bisection method, which only works as long as you know an interval that brackets a root (an interval on which the function changes sign). In general, calculating all roots of a black-box function is really hard. So, without going to more involved algorithms, the simplest scheme for finding all (simple) roots would be this: find the shortest distance $\delta$ between adjacent roots, sample the function at intervals of length $\delta$, and run bisection on each subinterval with a sign change in it. Of course, this doesn't work without knowing $\delta$. It is not a brilliant algorithm, and there are better ones out there, but I am interested in finding out how I could make it work.
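The scheme just described can be sketched as follows. This is a minimal sketch, not the questioner's code: the function name and the test polynomial are my own, and the spacing `delta` is assumed to already be known (which is exactly the point of the question).

```python
from math import sin, pi

def find_roots(f, a, b, delta, tol=1e-12):
    """Sample f on [a, b] at spacing delta (assumed smaller than the
    least gap between adjacent roots) and bisect on every sign change."""
    roots = []
    x = a
    while x < b:
        lo, hi = x, min(x + delta, b)
        if f(lo) * f(hi) < 0:            # bracket found
            while hi - lo > tol:         # plain bisection
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
        x += delta
    return roots

# e.g. sin(3t) on [0.1, 6.0] has roots at k*pi/3, k = 1, ..., 5,
# with adjacent-root gap pi/3 ~ 1.047 > delta = 0.5
roots = find_roots(lambda t: sin(3 * t), 0.1, 6.0, 0.5)
print(roots)
```

If `delta` exceeds the least gap, a subinterval can contain two roots with no net sign change, and both are silently missed; that failure mode is why a lower bound on the gap is needed.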

So the question is: given a trigonometric polynomial with only simple roots, how can I calculate the shortest distance between adjacent roots, so that the above scheme would succeed?

Example. If $t_{1,2}$ are two adjacent roots and $I=[t_1,t_2]$, then, since $f$ vanishes at both endpoints of $I$, I can write
$$ 2\max_{[t_1,t_2]}|f(t)| \leq \int_{t_1}^{t_2}|f'(t)|\,dt \leq |t_2-t_1|\max_{[t_1,t_2]}|f'(t)|. $$
So at least
$$ |t_1-t_2| \geq 2\frac{\max_I |f|}{\max_I |f'|}. $$
Then by Bernstein's inequality,
$$ \max_I |f'| \leq \|f'\|_\infty \leq n \|f\|_\infty, $$
so
$$ |t_1-t_2| \geq \frac2n \frac{\max_I|f|}{\max_{[-\pi,\pi]}|f|}. $$
But then I don't know quite how to deal with the right-hand side.
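For what it's worth, the bound can be verified numerically on a concrete example. This is a sketch under my own assumptions: the polynomial $f(t)=\sin 3t + 0.3\cos t$ is an arbitrary choice of degree $n=3$, and roots and maxima are approximated on a fine grid.

```python
import numpy as np

# Check: for adjacent roots t1 < t2 of f and I = [t1, t2], we expect
#     t2 - t1 >= (2/n) * max_I |f| / max_[-pi,pi] |f|.
n = 3
f = lambda t: np.sin(3 * t) + 0.3 * np.cos(t)

t = np.linspace(-np.pi, np.pi, 200001)
sup_norm = np.abs(f(t)).max()                   # approximates max over [-pi, pi]

# locate roots via sign changes on the fine grid, then refine by bisection
idx = np.where(np.sign(f(t[:-1])) * np.sign(f(t[1:])) < 0)[0]
roots = []
for i in idx:
    lo, hi = t[i], t[i + 1]
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    roots.append(0.5 * (lo + hi))

# compare each adjacent-root gap against the Bernstein-based lower bound
gaps, bounds = [], []
for t1, t2 in zip(roots, roots[1:]):
    seg = np.linspace(t1, t2, 2001)
    gaps.append(t2 - t1)
    bounds.append((2 / n) * np.abs(f(seg)).max() / sup_norm)

print(min(gaps), max(bounds))
```

Of course this only confirms the inequality after the roots are known; the difficulty the question raises is bounding $\max_I|f|$ from below without knowing $I$.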

Best Answer

The technical term for what you want to do is root isolation or root bracketing. One way to approach this is to find the minimal distance between the roots, as you are suggesting, together with a bounded interval large enough to contain all the roots. This idea was in fact used early in the history of root isolation for real polynomials, though the techniques have since become more sophisticated. I imagine the situation is similar for trigonometric polynomials.

Here's a reference that seems to discuss root isolation precisely for trigonometric polynomials:

Achim Schweikard, Real zero isolation for trigonometric polynomials, ACM Transactions on Mathematical Software 18, 350–359 (1992).
http://dx.doi.org/10.1145/131766.131775
