[Math] Minimal polynomial with a given maximum in the unit interval

na.numerical-analysis, orthogonal-polynomials, polynomials

Find the lowest degree polynomial that satisfies the following constraints:

i) $F(0)=0$

ii) $F(1)=0$

iii) The maximum of $F$ on the interval $(0,1)$ occurs at the point $c$

iv) $F(x)$ is positive on the interval $(0,1)$

The answer seems to depend pretty strongly on $c$. It's not difficult to find solutions for all $c$, but the solutions are not minimal. It seems like the solution involves Chebyshev polynomials, but I'm not familiar with them. Can anyone recommend a link?

Best Answer

I'll plunge in here.
Edit: I've now added a proof, using Brownian motion. (Earlier this was only justified heuristically.) Further revised Jan 5, 2011, to correct typos/formatting and to promote material from comments.

The $n$th Chebyshev $T$-polynomial $T_n(x) = \cos(n \arccos( x))$ is the unique degree-$n$ polynomial that folds the interval $[-1,1]$ exactly over itself $n$ times, taking $1$ to $1$.

When $n$ is even, $T_n(-1) = 1$ as well, so by an affine transformation you can make it satisfy the inequalities of the question: $$f_n(x) = 1 - T_n(2 x - 1) . $$

Here for example is the plot of $f_{10}$:

![alt text](http://dl.dropbox.com/u/5390048/Chebyshev10.jpg)

This has a maximum at $c = (1+\cos(\pi/10)) / 2 = 0.975528\dots$, pretty close to 1. (If a strict maximum is desired, modify the polynomial by adding $\epsilon x$, and change by a linear transformation of the domain so that $f(1)=0$.)
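As a quick numerical sanity check (my addition, not part of the original argument; it assumes numpy is available), one can evaluate $f_n$ on a grid and confirm the endpoint values, the nonnegativity, and the location of the rightmost maximum:

```python
import numpy as np

def f(n, x):
    # Renormalized Chebyshev polynomial f_n(x) = 1 - T_n(2x - 1),
    # using T_n(cos t) = cos(n t) for arguments in [-1, 1].
    return 1.0 - np.cos(n * np.arccos(2.0 * x - 1.0))

n = 10
x = np.linspace(0.0, 1.0, 200001)
y = f(n, x)

# f_n(0) = f_n(1) = 0, and f_n >= 0 on [0, 1] (it touches 0 at interior
# double roots, hence the epsilon*x remark for a strict maximum).
right = x > 0.9                       # isolate the rightmost hump
c_max = x[right][np.argmax(y[right])]
print(c_max)   # close to (1 + cos(pi/10)) / 2 = 0.975528...
```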

For odd degrees, you can restrict the Chebyshev polynomials to an interval from the first local maximum to 1, and renormalize in a similar way, to get the unique degree $n$ polynomial $V_n(x)$ that folds the unit interval exactly $n-1$ times over itself and has a double root at 0.

One way to know the existence of such a polynomial is this: extend the map of the interval to itself to all of $\mathbb C$ in a way that it is an $n$-fold branched cover of $\mathbb C$ (not yet analytic) whose branching pattern corresponds to the desired critical points and critical values. Pull back the complex structure in the range via this map. Using the Riemann mapping theorem, it can be seen that the resulting complex 1-manifold is $\mathbb C$, so you get a polynomial map. By symmetry, the critical points all lie on a straight line, so it can be renormalized as a map from the interval to itself. (This is a special case of more general theories, including Shabat polynomials and dessins d'enfants, as well as the theory of postcritically finite rational maps.)

For odd degree $n$, I think $V_n$ probably gives the maximal $c$.

Here's a heuristic argument: one can ask where the critical points in $\mathbb C$ lie for a polynomial that satisfies the given constraints and maximizes $c$. Given the $n-1$ critical points $\{c_i\}$, the derivative of $f$ is, up to a constant, $\prod_i (x-c_i)$. To make $c$ large, we need the mean value of this integrand on $[0,c]$ to be small compared to the mean value of minus the integrand on $[c,1]$: since the two integrals have equal magnitude and opposite sign, the ratio of the mean values is the reciprocal of the ratio of the lengths of the two intervals. This seems to say that the $c_i$'s want to be close to --- actually, inside --- the interval $[0,c]$. The best way to squeeze them in seems to be to make the interval fold as described.
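The heuristic can be checked concretely in the Chebyshev case. The sketch below (my addition, using numpy's Chebyshev class) builds $T_n(2x-1)$, whose critical points are exactly those of $f_n$, and confirms that all of them lie in $[0,c]$, the largest being $c$ itself:

```python
import numpy as np
from numpy.polynomial import Chebyshev

n = 10
# T_n(2x - 1) as a Chebyshev basis polynomial with domain [0, 1];
# f_n = 1 - T_n(2x - 1) has the same critical points.
Tn = Chebyshev.basis(n, domain=[0.0, 1.0])
crit = np.sort(np.real(Tn.deriv().roots()))

c = (1 + np.cos(np.pi / n)) / 2   # location of the rightmost maximum of f_n
print(crit)
# All n - 1 critical points lie in [0, c]; the largest is c itself.
```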

It's easy to relax from the extreme case. For example, in even degrees, just make an affine change to the larger interval $f_n^{-1}( [-t, \infty))$, $t \ge 0$; as $t \rightarrow \infty$, $c \rightarrow .5$. For the odd degree examples, add a linear function and renormalize in a similar way.

This is reminiscent of the phenomenon of monotonicity in the theory of iterated real polynomials, but simpler to establish.

Added: a proof, using Brownian Motion

Proof that Chebyshev polynomials are optimal for even degrees

There's a way to formulate the problem as a probability question. If you start a Brownian path from a point near infinity in the plane, it almost surely eventually hits the line segment $J = [0,1]$. The position of $c$ in this line segment is determined by the ratio of the probabilities that the path first hits $[0,c]$ versus $[c,1]$. (To get the exact function, you can map the complement of the line segment conformally to the complement of a circle; on a circle, hitting measure is proportional to arc length.)
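The exact function is easy to write down (this computation is my addition): under the Joukowski map $w \mapsto (w + 1/w)/2$ the exterior of the unit circle goes to the exterior of $[-1,1]$, and the point $x \in [0,1]$ corresponds to the arc angle $t$ with $2x - 1 = \cos t$. So the probability of first hitting $[c,1]$ is $\arccos(2c-1)/\pi$, and setting this to $1/n$ recovers the even-degree optimum $c = (1+\cos(\pi/n))/2$:

```python
import numpy as np

def hit_prob_right(c):
    # Probability that Brownian motion from infinity first hits [c, 1]
    # rather than [0, c]: hitting measure is uniform in the arc-length
    # parameter t, where x = (1 + cos t)/2 on the segment [0, 1].
    return np.arccos(2.0 * c - 1.0) / np.pi

for n in (2, 4, 6, 8, 10):
    c = (1 + np.cos(np.pi / n)) / 2
    print(n, hit_prob_right(c))   # equals 1/n, up to rounding
```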

Now suppose we have a degree $n$ polynomial $g$, as in the question, scaled so that $g([0,1]) = [0,1]$ and $g(0)=g(1) = 0$. As a complex polynomial, it defines a branched cover of $\mathbb C$ over $\mathbb C$. In 2 dimensions, conformal maps preserve the trajectories of Brownian motion: only the time parameter changes. Therefore, Brownian motion on the branched cover of the plane looks exactly like Brownian motion on the plane, but with the extra information of which of the $n$ sheets the trajectory is on at any given time. As the trajectory goes around the various critical values, the sheets are permuted. At any given time, if we just know the position of a Brownian path, the distribution on the sheets is uniform.

Let's denote by $J$ the unit interval $[0,1]$ in the domain (upstairs in the branched cover) and $K$ the same interval $[0,1]$ in the range (downstairs). Thus, $J$ is a union of line segments on sheets above $K$, and furthermore, the two subintervals of $J$, $[0,c]$ and $[c,1]$, both map surjectively to $[0,1]$.

Therefore, the first time the Brownian path downstairs crosses $[0,1]$, it has at least a $1/n$ probability of crossing the $[c,1]$ segment in the branched cover. For the even degree Chebyshev polynomial, as soon as it hits the segment $K$ downstairs it also hits $J$ upstairs, so the probability is exactly $1/n$: therefore, this is optimal. It is the unique optimal example for even degree, since if $g^{-1}(K) \ne J$, there would be a nonzero second chance for paths that hit $g^{-1}(K) \setminus J$ to continue on and still hit $[c,1]$.
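The $1/n$ hitting probability can also be estimated by simulation. The sketch below is my addition: it uses the standard walk-on-spheres approximation of Brownian motion, started far from the segment rather than at infinity, and checks that for the even-degree optimum $c = (1+\cos(\pi/6))/2$ the path first lands in $[c,1]$ with probability close to $1/6$:

```python
import numpy as np

rng = np.random.default_rng(0)

def dist_to_segment(x, y):
    # Distance from (x, y) to the segment [0, 1] on the real axis.
    return np.hypot(x - np.clip(x, 0.0, 1.0), y)

def first_hit(x, y, eps=1e-3, max_steps=10_000):
    # Walk on spheres: repeatedly jump to a uniform point on the largest
    # circle around the current point that does not cross the segment;
    # this samples the Brownian exit point of each disk exactly.
    for _ in range(max_steps):
        r = dist_to_segment(x, y)
        if r < eps:
            break
        a = rng.uniform(0.0, 2.0 * np.pi)
        x, y = x + r * np.cos(a), y + r * np.sin(a)
    return np.clip(x, 0.0, 1.0)   # project onto the segment

n = 6
c = (1 + np.cos(np.pi / n)) / 2          # even-degree optimum, ~0.933
hits = np.array([first_hit(0.5, 30.0) for _ in range(3000)])
p = np.mean(hits >= c)
print(p)   # should be close to 1/6 = 0.1667
```

Starting at height 30 instead of infinity, and stopping within `eps` of the segment, introduces a small bias, but it is well below the Monte Carlo noise at this sample size.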

The figures below illustrate this. The top figure shows the 6th Chebyshev polynomial, renormalized as above to take $[0,1] \rightarrow [0,1]$. The red interval is $J$; the caterpillar's skin is the inverse image of the circumscribed circle about $K$; also depicted is the inverse image of $\mathbb R$. Brownian motion starting at infinity has an equal probability of arriving in any of the 12 sectors (each of which opens out under the map to a half-plane), so the probability of arriving in the leftmost segment, which has length $(1-\cos(\pi/6))/2 = 0.0669873\dots$, is exactly $1/6$.

![alt text](http://dl.dropbox.com/u/5390048/Chebyshev6.jpg)

The next figure (below) shows a comparison polynomial (also graphed, further down) with an order-5 zero at 0 and one other critical point at $c$, mapping with critical value $1$. The same data is shown. When a Brownian path starting from infinity first hits $K$ (downstairs), it has equal probability of hitting any of the 6 segments inside the various curves: one of the two red segments (in $J$), or the vein in one of the four leaves. In the probability-$4/6$ event that it does not hit $J$, the continuing Brownian path has some chance of hitting the top interval, so this probability is strictly greater than $1/6$.

![alt text](http://dl.dropbox.com/u/5390048/MonobumpFlower.jpg)

Proof that Chebyshev polynomials are optimal for odd degree

For the odd degree case, a little more is needed. Since the question requires $g(0)=g(1)=0$, the interval $J$ covers each interior point of $K$ an even number of times, so at least one sheet of $g^{-1}(K)$ lies in $g^{-1}(K) \setminus J$. Let's suppose first that $g^{-1}(K)$ is connected. In that case, we can use the Riemann mapping theorem to map its complement conformally to the exterior of a unit disk; there is a set of measure at least $2\pi/n$ on the circle that does not correspond to $J$. We can follow Brownian motion by letting it "reflect" whenever it hits this portion of the boundary of the unit disk, and continue on until it hits a part that corresponds to $J$. With this formulation, it's obvious that to minimize the probability that the continuing trajectory hits the sensitive area corresponding to $[c,1]$, we need to minimize its length and put it as far away from the sensitive area as possible. That's exactly what happens for $V_n$: on the circle, the extra sheet is antipodal to the sensitive sheet.

A similar argument applies to the disconnected case, although without quite as simple a visual representation. It's easy to establish that any optimal polynomial must have the maximal number of sheets above each point in $K$. The hitting probability of the sensitive area, for random walks starting at a point $z$, is a harmonic function on $\mathbb C \setminus J$, with limit 1 along $[c,1]$ and 0 along $[0,c]$. This harmonic function has no critical points, so if there is a component of $g^{-1}(K)$ not attached to $J$, its mean on this component can be reduced by moving the component toward $0$, that is, by moving the critical values not in $K$ toward $0$. Below is a picture of the optimal solution for degree 5. There's a short tail to the caterpillar where the Brownian path gets a second chance, but it's far away from the sensitive portion, so it has only a small chance of next hitting there rather than in $[0,c]$. The interval $[c,1]$ is comparatively short because it is exposed out at the end of the interval, but it is not as exposed, so not as short, as in the Chebyshev case.

![alt text](http://dl.dropbox.com/u/5390048/Chebyshev5.jpg)

The Constants

When $n$ is even, there is a solution for $c$ between $(1-\cos(\pi/n))/2$ and $(1+\cos(\pi/n))/2$. If $n$ is odd, there is a solution for $c$ between $(1-\cos(\pi/n))/(1+\cos(\pi/n))$ and $2 \cos(\pi/n)/(1+\cos(\pi/n))$. Numerically, the low values $(n, c)$ are $(2, 0.5)$, $(3, 0.333333)$, $(4, 0.146447)$, $(5, 0.105573)$, $(6, 0.0669873)$, $(7, 0.0520951)$, $(8, 0.0380602)$, $(9, 0.0310912)$, $(10, 0.0244717)$.
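These constants are easy to tabulate; a short snippet (my addition) reproducing the listed low values for $c$:

```python
import numpy as np

def low_c(n):
    # Smallest achievable location c of the maximum for degree n,
    # per the even/odd formulas above.
    t = np.cos(np.pi / n)
    return (1 - t) / 2 if n % 2 == 0 else (1 - t) / (1 + t)

for n in range(2, 11):
    print(n, round(low_c(n), 7))
```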

**End of added proof**

Polynomials with a unique local maximum

For comparison, here are plots of the degree-$n$ polynomials (unique up to a constant) that have an $(n-1)$-fold root at 0, a critical point at $c$, and take the value $0$ at 1. At first I guessed that these might give the optimal $c$, but for them, $c = 1-1/n$, much smaller than for the Chebyshev polynomials. The plots are for $n = 2, 3, \dots, 10$.

![alt text](http://dl.dropbox.com/u/5390048/InterpolatingFunctions.jpg)

However, these polynomials answer a different question: given $c$, what is the minimum degree of a polynomial that is $0$ at $0$ and $1$ and has a unique local maximum at $c$? These polynomials, also discussed by Wadim Zudilin, have that property. For such a polynomial, the same technique as above can be used. For any candidate polynomial $f$, a Brownian path starting at infinity has probability $1/n$ of first hitting the interval $[0,c]$, probability $1/n$ of first hitting $[c,1]$, and probability $(n-2)/n$ of first hitting elsewhere on $f^{-1}([0,1])$. The same proof shows these examples are optimal.
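As a final numerical check (my addition, assuming numpy): up to a constant these comparison polynomials are $x^{n-1}(1-x)$, and their maximum on $[0,1]$ does sit at $c = 1 - 1/n$:

```python
import numpy as np

def p(n, x):
    # Degree-n polynomial with an (n-1)-fold root at 0 and a simple
    # root at 1, up to scale: x^(n-1) * (1 - x).
    return x ** (n - 1) * (1 - x)

x = np.linspace(0.0, 1.0, 100001)
cs = {n: x[np.argmax(p(n, x))] for n in range(2, 11)}
for n, c in cs.items():
    print(n, c)   # maximum at c = 1 - 1/n
```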