[Math] Spread polynomials

orthogonal-polynomials

Norman Wildberger's "rational trigonometry" has been viewed by some mathematicians as a clever new take on an ancient topic. Wildberger's "spread polynomials" $S_n$ are characterized by the identity
$$
\sin^2(n\theta) = S_n(\sin^2\theta)
$$
(except that Wildberger refuses to refer explicitly to the sine function in the definition and does it by other means). In one sense these are trivially equivalent to the Chebyshev polynomials $T_n$ characterized by
$$
\cos(n\theta) = T_n(\cos\theta).
$$
Wildberger notes that $1 - 2S_n(s) = T_n(1 - 2s)$.
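Both characterizations and the conjugation relation can be checked numerically. Here is a minimal sketch; the helper names `T` and `S` are my own, and `S` is built from the conjugation rather than from Wildberger's rational definition:

```python
import math

def T(n, x):
    # Chebyshev polynomial of the first kind, via the recurrence
    # T_0 = 1, T_1 = x, T_{k+1} = 2x T_k - T_{k-1}
    if n == 0:
        return 1.0
    a, b = 1.0, x
    for _ in range(n - 1):
        a, b = b, 2 * x * b - a
    return b

def S(n, s):
    # Spread polynomial, obtained from the conjugation 1 - 2 S_n(s) = T_n(1 - 2s)
    return (1 - T(n, 1 - 2 * s)) / 2

theta = 0.7  # arbitrary test angle
for n in range(11):
    assert abs(T(n, math.cos(theta)) - math.cos(n * theta)) < 1e-9
    assert abs(S(n, math.sin(theta) ** 2) - math.sin(n * theta) ** 2) < 1e-9
```

The loop confirms both defining identities, $\cos(n\theta)=T_n(\cos\theta)$ and $\sin^2(n\theta)=S_n(\sin^2\theta)$, at a sample angle.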

In thinking about whether this polynomial sequence is even worth mentioning after Chebyshev polynomials have been treated, three questions come to mind:

  • Could it be that an essential difference justifying a separate treatment is the factorization of these polynomials? Wildberger factors the spread polynomials. Is there some important reason for doing that?

  • Is there a combinatorial interpretation of the coefficients? For $n=1,\dots,10$, the coefficient of the first-degree term in $S_n$ is $n^2$ and the leading coefficient is $(-4)^{n-1}$; the constant term is always zero, since $S_n(0)=0$.

  • If there's no essential difference justifying separate treatment, then the fact that the conventional way of viewing the sequence (Chebyshev polynomials) came first chronologically doesn't mean it's better than Wildberger's way of viewing it (spread polynomials). Is it?
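The coefficient patterns asked about in the second bullet can be tested with a short sketch. It relies on the recurrence $S_{n+1}(s) = 2s + 2(1-2s)S_n(s) - S_{n-1}(s)$, which is not stated above but follows from the Chebyshev recurrence under the substitution $x = 1-2s$:

```python
def spread_coeffs(n):
    # Integer coefficients of S_n, constant term first, built from the
    # recurrence S_{n+1} = 2s + 2(1 - 2s) S_n - S_{n-1}, with S_0 = 0, S_1 = s
    if n == 0:
        return [0]
    prev, cur = [0], [0, 1]
    for _ in range(n - 1):
        nxt = [0] * (len(cur) + 1)
        nxt[1] += 2                      # the 2s term
        for i, c in enumerate(cur):
            nxt[i] += 2 * c              # contribution of 2 * S_n
            nxt[i + 1] -= 4 * c          # contribution of -4s * S_n
        for i, c in enumerate(prev):
            nxt[i] -= c                  # contribution of -S_{n-1}
        prev, cur = cur, nxt
    return cur

for n in range(1, 11):
    c = spread_coeffs(n)
    assert c[0] == 0                 # zero constant term
    assert c[1] == n * n             # first-degree coefficient is n^2
    assert c[-1] == (-4) ** (n - 1)  # leading coefficient is (-4)^(n-1)
```

For example, `spread_coeffs(3)` returns `[0, 9, -24, 16]`, matching $S_3(s) = 9s - 24s^2 + 16s^3 = s(3-4s)^2$.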

Best Answer

Regarding which point of view is preferable, I don't see that there's much difference between them. The Chebyshev polynomials map $[-1,1]$ to $[-1,1]$, the spread polynomials map $[0,1]$ to $[0,1]$, and they are conjugate under a linear map between $[-1,1]$ and $[0,1]$, so all their properties translate easily between the two frameworks.

However, I'd vote for Chebyshev polynomials as being somewhat more fundamental, due to orthogonality. The spread polynomials aren't orthogonal with respect to any measure, because they are nonnegative everywhere. To get orthogonality, one must subtract $1/2$, after which they become orthogonal with respect to $dx/\sqrt{x(1-x)}$ on the interval $[0,1]$. By contrast, the Chebyshev polynomials are already orthogonal with respect to $dx/\sqrt{1-x^2}$ on $[-1,1]$, with no subtraction needed. This isn't a big deal, since it just amounts to subtracting $1/2$, but it's nice not to have to do the subtraction.
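The orthogonality claim is easy to test numerically: under the substitution $x=\sin^2\theta$, the weight $dx/\sqrt{x(1-x)}$ becomes simply $2\,d\theta$ on $[0,\pi/2]$, and $S_k(x)-1/2$ becomes $\sin^2(k\theta)-1/2$. A rough quadrature sketch (the helper name `inner` and the step count are my own choices):

```python
import math

def inner(m, n, N=20000):
    # Midpoint-rule approximation of
    #   integral_0^1 (S_m(x) - 1/2)(S_n(x) - 1/2) dx / sqrt(x(1-x)),
    # computed via the substitution x = sin^2(theta), under which the
    # weight becomes 2 dtheta on [0, pi/2] and S_k(x) becomes sin^2(k theta)
    h = (math.pi / 2) / N
    total = 0.0
    for i in range(N):
        t = (i + 0.5) * h
        total += (math.sin(m * t) ** 2 - 0.5) * (math.sin(n * t) ** 2 - 0.5)
    return 2 * h * total

assert abs(inner(2, 3)) < 1e-6  # distinct indices: numerically orthogonal
assert inner(3, 3) > 0.1        # equal indices: nonzero norm
```

The substitution shows why the subtraction works: $\sin^2(k\theta)-1/2 = -\cos(2k\theta)/2$, and the cosines are orthogonal on $[0,\pi/2]$.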

Overall, there's nothing sacred about using the domain $[-1,1]$ for Chebyshev polynomials. Of course it aligns beautifully with trigonometry, but Chebyshev polynomials are important in many other settings (such as approximation theory) in which $[-1,1]$ plays no special role, and they are simply rescaled to fit the interval of interest. From that perspective, $[0,1]$ is just as good a domain. On the other hand, I see no gain from making the range $[0,1]$ as well, and one has to undo it to recover orthogonality.

Comments added in edit:

As for the factorizations, this amounts to factoring $T_n(x)$ (for Chebyshev polynomials) or $T_n(x)+1$ (for spread polynomials; not quite, see below). Both are interesting, since both the roots and the extrema of the Chebyshev polynomials are important.

In fact, $T_{2n}(x)= T_2(T_n(x))$ and hence $T_{2n}(x)+1 = 2T_n(x)^2$, so factoring spread polynomials includes factoring Chebyshev polynomials as the even-index case. (In the odd-index case, $T_{2n+1}(x)+1 = (T_{n+1}(x)+T_n(x))^2/(x+1)$, but I'm not certain how to interpret this.)
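Both identities are easy to sanity-check numerically; the trigonometric characterization of $T_n$ is used here for brevity:

```python
import math

def T(n, x):
    # Chebyshev polynomial of the first kind via cos(n arccos x), for x in [-1, 1]
    return math.cos(n * math.acos(x))

x = 0.3  # arbitrary point in (-1, 1), avoiding x = -1 in the odd-index identity
for n in range(1, 8):
    # even index: T_{2n} + 1 = 2 T_n^2
    assert abs(T(2 * n, x) + 1 - 2 * T(n, x) ** 2) < 1e-9
    # odd index: T_{2n+1} + 1 = (T_{n+1} + T_n)^2 / (x + 1)
    lhs = T(2 * n + 1, x) + 1
    rhs = (T(n + 1, x) + T(n, x)) ** 2 / (x + 1)
    assert abs(lhs - rhs) < 1e-9
```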

So I'd say factoring spread polynomials is more general but slightly more obscure. Definitely both are interesting, though.

There are combinatorial interpretations of the Chebyshev polynomials involving weighted monomer-dimer configurations (although the conditions are a little odd: see http://www.math.hmc.edu/~benjamin/papers/CombTrig.pdf). The analogous idea doesn't work out as nicely for spread polynomials, but maybe some other approach is more appropriate. It's worth noting that the Chebyshev polynomials have somewhat simpler coefficients. For example, $T_8(x)=128x^8-256x^6+160x^4-32x^2+1$ while $S_8(x)=-16384x^8 + 65536x^7 - 106496x^6 + 90112x^5 - 42240x^4 + 10752x^3 - 1344x^2 + 64x$.
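The quoted expansion of $S_8$ can itself be verified against the defining identity $\sin^2(8\theta)=S_8(\sin^2\theta)$:

```python
import math

# Coefficients of S_8 as quoted above, constant term first
s8 = [0, 64, -1344, 10752, -42240, 90112, -106496, 65536, -16384]

def eval_poly(coeffs, x):
    # Horner evaluation, constant term first
    r = 0.0
    for c in reversed(coeffs):
        r = r * x + c
    return r

for k in range(1, 10):
    theta = 0.1 * k
    assert abs(eval_poly(s8, math.sin(theta) ** 2) - math.sin(8 * theta) ** 2) < 1e-6
```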
