A continuous function on $[0,1]$ orthogonal to each monomial of the form $x^{n^2}$


Let us consider the continuous functions over $[0,1]$ fulfilling
$$ \int_{0}^{1} f(x) x^n\,dx = 0 $$
for $n=0$ and for every $n\in E\subseteq\mathbb{N}^+$. The Müntz–Szász theorem gives that
$$ \sum_{n\in E}\frac{1}{n} = +\infty \Longleftrightarrow \text{the only such function is } f\equiv 0, $$
so, since $\sum_{n\geq 1}\frac{1}{n^2}<+\infty$, there is a non-zero continuous function $f(x)$ such that
$$ \int_{0}^{1} f(x) x^{n^2}\,dx = 0 \tag{1}$$
holds for every $n\in\mathbb{N}$.

Question: can we construct a nice, explicit function $f\neq 0$ fulfilling $(1)$ for every $n\in\mathbb{N}$?

We may consider functions of the form
$$ f(x) = \sum_{n\geq 0} c_n P_n(2x-1) $$
with $P_n(2x-1)$ being the $n$-th shifted Legendre polynomial.
The orthogonality to $1$ and $x$ translates into $c_0=c_1=0$, the orthogonality to $x^4$ translates into
$$ \tfrac{2}{35}c_2+\tfrac{1}{70}c_3+\tfrac{1}{630}c_4=0, $$
the orthogonality to $x^9$ translates into
$$ \tfrac{3}{55}c_2+\tfrac{21}{715}c_3+ \tfrac{9}{715}c_4 +\tfrac{3}{715}c_5+\tfrac{3}{2860}c_6+\tfrac{9}{48620}c_7+\tfrac{1}{48620}c_8+\tfrac{1}{923780}c_9=0, $$
and so on. The minimal (with respect to the $\ell^2$ norm) solution of this infinite system with $c_2=1$ (or $c_4=1$) should give a sequence $\{c_n\}_{n\geq 0}$ ensuring the continuity of $f(x)$, but this is non-trivial, and I would appreciate a more explicit construction or example of such an $f$.
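These coefficients are just the exact inner products $\langle x^k, P_n(2x-1)\rangle$, and they can be checked in exact rational arithmetic. A short sketch (helper names are mine; only the standard library is used):

```python
from fractions import Fraction

def shifted_legendre(n):
    """Coefficients (ascending powers of x) of the shifted Legendre
    polynomial P_n(2x-1), via the three-term recurrence
    (m+1) P_{m+1}(y) = (2m+1) y P_m(y) - m P_{m-1}(y) with y = 2x-1."""
    p_prev, p_cur = [Fraction(1)], [Fraction(-1), Fraction(2)]
    if n == 0:
        return p_prev
    for m in range(1, n):
        t = [Fraction(0)] * (m + 2)
        for i, c in enumerate(p_cur):           # (2m+1)(2x-1) * p_cur
            t[i] -= (2 * m + 1) * c
            t[i + 1] += 2 * (2 * m + 1) * c
        for i, c in enumerate(p_prev):          # - m * p_prev
            t[i] -= m * c
        p_prev, p_cur = p_cur, [c / (m + 1) for c in t]
    return p_cur

def inner(k, n):
    """Exact <x^k, P_n(2x-1)> = integral of x^k P_n(2x-1) over [0,1]."""
    return sum(c / (i + k + 1) for i, c in enumerate(shifted_legendre(n)))

print([inner(4, n) for n in range(2, 5)])   # [2/35, 1/70, 1/630]
print([inner(9, n) for n in range(2, 10)])  # 3/55, 21/715, 9/715, ...
```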

Addendum: another possible construction is to apply the Gram-Schmidt process to $\{1,x,x^4,x^9,\ldots\}$ in order to get a sequence of polynomials $\{p_n(x)\}_{n\geq 0}$ such that

  • $p_n(x)=\sum_{k=0}^{n} c_k x^{k^2}$
  • $n\neq m \Longrightarrow \langle p_n(x),p_m(x)\rangle = 0$
  • $\max_{x\in [0,1]} |p_n(x)| = 1$ or $\langle p_n(x),p_n(x)\rangle = 1$

then take $f(x)$ as the pointwise limit of a convergent subsequence of $\{p_n(x)\}_{n\geq 0}$.
Still, not really explicit.
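For what it's worth, the Gram–Schmidt step itself is easy to carry out exactly, since $\langle x^a, x^b\rangle = \frac{1}{a+b+1}$. A sketch, with sparse polynomials stored as exponent-to-coefficient maps (the function names are mine):

```python
from fractions import Fraction

def inner(p, q):
    """<p, q> on L2([0,1]) for sparse polynomials {exponent: coefficient}."""
    return sum(cp * cq / Fraction(a + b + 1)
               for a, cp in p.items() for b, cq in q.items())

def muntz_gram_schmidt(n):
    """Gram-Schmidt applied to 1, x, x^4, ..., x^(n^2), without normalization."""
    basis = []
    for k in range(n + 1):
        p = {k * k: Fraction(1)}
        for q in basis:
            coef = inner(p, q) / inner(q, q)
            for b, cq in q.items():
                p[b] = p.get(b, Fraction(0)) - coef * cq
        basis.append(p)
    return basis

ps = muntz_gram_schmidt(4)
# pairwise orthogonality holds exactly:
print(all(inner(ps[i], ps[j]) == 0 for i in range(5) for j in range(i)))  # True
```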

A more promising approach is to consider some lacunary Fourier series, like
$$ g(\theta) = \sum_{n\geq 1}\frac{\cos(n\theta)}{n^2} - \sum_{n\geq 1}\frac{\cos(n^2\theta)}{n^4}, $$
which clearly fulfills $\int_{-\pi}^{\pi}g(\theta)\cos(n^2\theta)\,d\theta = 0$, then turn such $g(\theta)$ into an $f(x)$ fulfilling $(1)$ via some slick substitution.
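The orthogonality of $g$ to $\cos(n^2\theta)$ is easy to check numerically: on a uniform grid over a full period, the trapezoidal rule integrates trigonometric polynomials of degree below the grid size exactly, so truncating both series suffices. A sketch (the truncation levels and names are my choice):

```python
import math

def g(theta, N1=60, N2=7):
    """Truncation of g(theta) = sum cos(n*theta)/n^2 - sum cos(n^2*theta)/n^4."""
    return (sum(math.cos(n * theta) / n**2 for n in range(1, N1 + 1))
            - sum(math.cos(n * n * theta) / n**4 for n in range(1, N2 + 1)))

def moment(m, M=1024):
    """Trapezoidal approximation of the integral of g(theta)cos(m*theta)
    over [-pi, pi]; exact (up to roundoff) for trig polynomials of degree < M."""
    h = 2 * math.pi / M
    return h * sum(g(-math.pi + j * h) * math.cos(m * (-math.pi + j * h))
                   for j in range(M))

print(moment(4))                  # ~ 0, since 4 = 2^2 is a perfect square
print(moment(3), math.pi / 9)     # ~ pi/9, since 3 is not a perfect square
```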

Yet another way is to consider the inverse Laplace transform of
$$ \frac{1}{s}\prod_{k=0}^{n}\frac{k^2+1-s}{k^2+1+s} $$
evaluated at $-\log x$. This gives a polynomial, bounded between $-1$ and $1$, which is orthogonal to $1,x,x^4,\ldots,x^{n^2}$. Is this sequence of polynomials (or a subsequence of this sequence) convergent to a continuous function? I do not know. If so,
$$ f(x)=\mathcal{L}^{-1}\left(\frac{\sin(\pi\sqrt{s-1})}{\sqrt{s-1}}\cdot\frac{\sqrt{s+1}}{\sin(\pi\sqrt{s+1})}\cdot \frac{1}{s}\right)(-\log x)$$
is an explicit solution. Here is a plot of the first polynomials produced by the last approach:

[plot of the first polynomials]
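The inverse Laplace transform above can be made concrete by partial fractions: the rational function has simple poles at $s=0$ and $s=-(k^2+1)$, and $\mathcal{L}^{-1}\left[\frac{1}{s+a}\right](t)=e^{-at}$ becomes $x^{a}$ after the substitution $t=-\log x$. A sketch in exact arithmetic, checking the claimed orthogonality (function names are mine):

```python
from fractions import Fraction

def muntz_poly(n):
    """Partial-fraction expansion of (1/s) * prod_{k=0..n} (k^2+1-s)/(k^2+1+s),
    returned as the polynomial {exponent: coefficient} obtained from
    L^{-1}[1/(s+a)](-log x) = x^a."""
    a = [k * k + 1 for k in range(n + 1)]
    poly = {0: Fraction(1)}                 # residue at the pole s = 0 is 1
    for m in range(n + 1):
        r = Fraction(-2)                    # (1/s)(a_m - s) contributes -2 at s = -a_m
        for k in range(n + 1):
            if k != m:
                r *= Fraction(a[k] + a[m], a[k] - a[m])
        poly[a[m]] = r
    return poly

def moment(poly, alpha):
    """Exact integral over [0,1] of (sum c_e x^e) * x^alpha."""
    return sum(c / (e + alpha + 1) for e, c in poly.items())

p = muntz_poly(4)
print([moment(p, k * k) for k in range(5)])   # all exactly 0
```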

Best Answer

Short Answer. Expanding on @orangeskid's answer, let

$$ F(x) := \frac{1}{\Psi_{\infty}(0)^2} - x - \frac{1}{\Psi_{\infty}(0)} \sum_{k=1}^{\infty} (-1)^{k} \frac{2k^2 \operatorname{sinhc}(\pi\sqrt{k^2 + 3})}{(k^2 + 1)(k^2 + 2)} x^{k^2+2}, $$

where $\operatorname{sinhc}(x) = \frac{\sinh x}{x}$ and

$$ \Psi_{\infty}(0) = \frac{\operatorname{sinhc}(\pi\sqrt{2})}{\operatorname{sinhc}(\pi)}, $$

which is the value at $\alpha = 0$, $p = 1$ of the function $\Psi_{\infty}$ appearing in the proof below.

Although the above series converges only for $x \in [0, 1)$, we can prove that setting $F(1) = 0$ extends $F$ to an absolutely continuous function on $[0, 1]$ which satisfies

$$ \int_{0}^{1} F(x) x^{n^2} \, \mathrm{d}x = 0 $$

for every $n = 1, 2, \ldots$ Below is the graph of $F(x)$ on $[0, 1]$:

[graph of F]


Proof of the claim.

Step 1. Consider a sequence $-\frac{1}{2} < \alpha_1 < \alpha_2 < \ldots$ and define the function $f_n$ by

\begin{align*} f_n(x) &:= \frac{ \begin{vmatrix} 1 & x^{\alpha_1} & \cdots & x^{\alpha_n} \\ \langle 1, t^{\alpha_1} \rangle & \langle t^{\alpha_1}, t^{\alpha_1} \rangle & \cdots & \langle t^{\alpha_n}, t^{\alpha_1} \rangle \\ \vdots & \vdots & \ddots & \vdots \\ \langle 1, t^{\alpha_n} \rangle & \langle t^{\alpha_1}, t^{\alpha_n} \rangle & \cdots & \langle t^{\alpha_n}, t^{\alpha_n} \rangle \end{vmatrix} }{ \begin{vmatrix} \langle t^{\alpha_1}, t^{\alpha_1} \rangle & \cdots & \langle t^{\alpha_n}, t^{\alpha_1} \rangle \\ \vdots & \ddots & \vdots \\ \langle t^{\alpha_1}, t^{\alpha_n} \rangle & \cdots & \langle t^{\alpha_n}, t^{\alpha_n} \rangle \end{vmatrix} }, \end{align*}

where $\langle g(t), h(t) \rangle = \int_{0}^{1} g(t)h(t) \, \mathrm{d}t$ denotes the inner product on $L^2([0,1])$. Then, as explained in @orangeskid's answer, for $\alpha > -\frac{1}{2}$ with $\alpha \notin \{\alpha_1, \alpha_2, \ldots\}$,

$$ x^{\alpha} = f_n + (x^{\alpha} - f_n) $$

is an orthogonal decomposition of $x^{\alpha}$, where $x^{\alpha} - f_n \in V_n := \operatorname{span}(x^{\alpha_1}, \ldots, x^{\alpha_n})$ and $f_n \perp V_n$. Since the subspaces $V_n$ are increasing in $n$, the orthogonal projections of $x^{\alpha}$ onto $V_n$ converge, and hence $f_n$ converges in $L^2([0,1])$. Moreover,

$$ \|f_n\|^2 = \operatorname{dist}(t^{\alpha}, V_n)^2 = \frac{G(t^{\alpha}, t^{\alpha_1}, \ldots, t^{\alpha_n})}{G(t^{\alpha_1}, \ldots, t^{\alpha_n})}, $$

where $G(v_1, \ldots, v_n) = \det[\langle v_i, v_j \rangle]$ is the Gram determinant.

Step 2. We can expand the determinant in the numerator along the first row and compute the coefficients using Cauchy determinants. After a bit of computation, it turns out that

\begin{align*} f_n(x) = x^{\alpha} - \sum_{k=1}^{n} \frac{\psi_{n,k}(\alpha_k)}{\psi_{n,k}(\alpha)} x^{\alpha_k}, \end{align*}

where $\psi_{n,k}(\alpha)$ is the rational function of $\alpha$ given by

$$ \psi_{n,k}(\alpha) := (\alpha_k - \alpha) \Psi_n(\alpha) \qquad\text{and}\qquad \Psi_n(\alpha) := \frac{\prod_{j=1}^{n} (\alpha_j + \alpha + 1)}{\prod_{j=1}^{n} (\alpha_j - \alpha)}. $$

A similar computation also shows that

$$ \| f_n \|^2 = \frac{\prod_{j=1}^{n} (\alpha - \alpha_j)^2}{(2\alpha+1)\prod_{j=1}^{n} (\alpha + \alpha_j + 1)^2} = \frac{1}{(2\alpha+1)\Psi_n(\alpha)^2}. $$
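For small $n$ these closed forms can be verified in exact arithmetic against the projection obtained from the normal equations. A sketch with $\alpha = 0$ and $\alpha_k = k^2 + 1$ (i.e. $p = 1$, anticipating Step 3; helper names are mine):

```python
from fractions import Fraction

def solve(A, b):
    """Gaussian elimination over the rationals (A nonsingular)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        piv = next(r for r in range(i, n) if M[r][i] != 0)
        M[i], M[piv] = M[piv], M[i]
        for r in range(n):
            if r != i:
                fac = M[r][i] / M[i][i]
                M[r] = [x - fac * y for x, y in zip(M[r], M[i])]
    return [M[i][n] / M[i][i] for i in range(n)]

alpha = Fraction(0)
alphas = [Fraction(k * k + 1) for k in range(1, 5)]   # alpha_k = k^2 + p, p = 1

# normal equations: project x^alpha onto V_n = span(x^{alpha_1}, ..., x^{alpha_n})
G = [[1 / (ai + aj + 1) for aj in alphas] for ai in alphas]
b = [1 / (alpha + ai + 1) for ai in alphas]
c = solve(G, b)                      # so f_n = x^alpha - sum_k c[k] x^{alpha_k}

def Psi(s):
    """Psi_n(s) = prod(alpha_j + s + 1) / prod(alpha_j - s)."""
    num = den = Fraction(1)
    for a in alphas:
        num, den = num * (a + s + 1), den * (a - s)
    return num / den

def psi_at_pole(k):
    """psi_{n,k}(alpha_k): the factor alpha_k - s cancels the j = k term."""
    num = den = Fraction(1)
    for j, a in enumerate(alphas):
        num *= a + alphas[k] + 1
        if j != k:
            den *= a - alphas[k]
    return num / den

closed = [psi_at_pole(k) / ((alphas[k] - alpha) * Psi(alpha)) for k in range(4)]
print(c == closed)                                        # True

# ||f_n||^2 = <x^alpha, f_n>, since f_n is orthogonal to V_n
norm2 = 1 / (2 * alpha + 1) - sum(ck * bk for ck, bk in zip(c, b))
print(norm2 == 1 / ((2 * alpha + 1) * Psi(alpha) ** 2))   # True
```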

Step 3. Now let us specialize to the case where $(\alpha_k)$ is of the form $\alpha_k = k^2 + p$. Then Euler's reflection formula for the gamma function and Stirling's approximation show that

\begin{align*} \prod_{j=1}^{n} (j^2 - q) = \frac{(n-\sqrt{q})!(n+\sqrt{q})!}{(-\sqrt{q})!\sqrt{q}!} = (n!)^2 \operatorname{sinc} (\pi \sqrt{q}) \prod_{j=n+1}^{\infty} \frac{j^2}{j^2 - q}, \end{align*}

where $\operatorname{sinc}(x) = \frac{\sin x}{x}$ is the (unnormalized) sinc function and $s! = \Gamma(s+1)$. Plugging this into $\Psi_n(\alpha)$, we get

\begin{align*} \Psi_{n}(\alpha) &= \frac{\operatorname{sinhc}(\pi\sqrt{p+\alpha+1})}{\operatorname{sinc}(\pi\sqrt{\alpha - p})} \prod_{j=n+1}^{\infty} \frac{j^2 + p - \alpha}{j^2 + p + \alpha + 1} \end{align*}

and

\begin{align*} \psi_{n,k}(\alpha_k) &= \lim_{s \to k} \left( k^2 - s^2 \right) \Psi_{n}(s^2 + p) \\ &= (-1)^{k-1} 2k^2 \operatorname{sinhc}(\pi\sqrt{\smash[b]{k^2 + 2p + 1}}) \prod_{j=n+1}^{\infty} \frac{j^2 - k^2}{j^2 + k^2 + 2p + 1}. \end{align*}

Note that both $\Psi_n(\alpha)$ and $\psi_{n,k}(\alpha_k)$ converge as $n \to \infty$:

\begin{gather*} \Psi_{\infty}(\alpha) := \lim_{n\to\infty} \Psi_{n}(\alpha) = \frac{\operatorname{sinhc}(\pi\sqrt{p+\alpha+1})}{\operatorname{sinhc}(\pi\sqrt{p-\alpha})}, \\[0.5em] \lim_{n\to\infty} \psi_{n,k}(\alpha_k) = (-1)^{k-1} 2k^2 \operatorname{sinhc}(\pi\sqrt{\smash[b]{k^2 + 2p + 1}}). \end{gather*}

Moreover, it is clear from the formula above that

$$ |\psi_{n,k}(\alpha_k)| \leq \frac{2k^2}{\pi\sqrt{k^2 + 2p + 1}} e^{\pi\sqrt{k^2 + 2p + 1}}$$

for all $1 \leq k \leq n$. Therefore, by the dominated convergence theorem, for each $x \in [0, 1)$,

\begin{align*} f(x) &:= \lim_{n\to\infty} f_n(x) \\ &= x^{\alpha} - \lim_{n\to\infty} \frac{1}{\Psi_n(\alpha)} \sum_{k=1}^{n} \frac{\psi_{n,k}(\alpha_k)}{k^2 + p - \alpha} x^{k^2 + p} \\ &= \bbox[color:navy;border:1px dotted navy;padding:3px]{x^{\alpha} + \frac{1}{\Psi_{\infty}(\alpha)} \sum_{k=1}^{\infty} (-1)^{k} \frac{2k^2}{k^2 + p - \alpha} \operatorname{sinhc}(\pi\sqrt{\smash[b]{k^2 + 2p + 1}}) x^{k^2+p}.} \end{align*}

Of course, this $f(x)$ must coincide with the $L^2$-limit of $f_n$. Therefore, for each $n = 1, 2, \ldots,$

$$ \int_{0}^{1} f(x)x^{n^2+p} \, \mathrm{d}x = \lim_{N \to \infty} \int_{0}^{1} f_N(x)x^{n^2+p} \, \mathrm{d}x = 0. $$
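The two limits in Step 3 can be sanity-checked numerically by comparing the finite products against the sinhc expressions (a sketch with $p = 1$; the convergence is slow, of order $1/n$):

```python
import math
from fractions import Fraction

P = 1  # the parameter p in alpha_k = k^2 + p

def psi_nk(n, k):
    """Finite-n value psi_{n,k}(alpha_k) =
    prod_{j<=n}(j^2+k^2+2p+1) / prod_{j<=n, j!=k}(j^2-k^2), exactly."""
    num = den = 1
    for j in range(1, n + 1):
        num *= j * j + k * k + 2 * P + 1
        if j != k:
            den *= j * j - k * k
    return float(Fraction(num, den))

def sinhc(x):
    return math.sinh(x) / x

def psi_limit(k):
    """Claimed limit: (-1)^(k-1) * 2k^2 * sinhc(pi sqrt(k^2+2p+1))."""
    return (-1) ** (k - 1) * 2 * k * k * sinhc(math.pi * math.sqrt(k * k + 2 * P + 1))

for k in (1, 2, 3):
    print(k, psi_nk(1000, k) / psi_limit(k))   # ratios close to 1
```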

Below is the graph of $f(x)$ for $x \in [0, 1)$ when $\alpha = 0$ and $p = 1$.

[graph of f when α = 0 and p = 1]

Step 4. The function $f$ is almost what we want, but it appears to suffer from a discontinuity at $x = 1$. To remedy this, we now fix $\alpha = 0$, so that the corresponding $f$ is given by

$$ f(x) = 1 + \frac{1}{\Psi_{\infty}(0)} \sum_{k=1}^{\infty} (-1)^{k} \frac{2k^2}{k^2 + p} \operatorname{sinhc}(\pi\sqrt{\smash[b]{k^2 + 2p + 1}}) x^{k^2+p}. $$

Then

$$ \int_{0}^{1} f(x) \, \mathrm{d}x = \langle f, 1 \rangle = \| f \|^2 + \underbrace{\langle f, 1-f \rangle}_{=0} = \frac{1}{\Psi_{\infty}(0)^2}. $$

Using this, we define $F : [0, 1] \to \mathbb{R}$ as

\begin{align*} F(x) &= \int_{x}^{1} f(t) \, \mathrm{d}t \\ &= \int_{0}^{1} f(t) \, \mathrm{d}t - \int_{0}^{x} f(t) \, \mathrm{d}t \\ &= \bbox[color:navy;border:1px dotted navy;padding:3px]{\frac{1}{\Psi_{\infty}(0)^2} - x - \frac{1}{\Psi_{\infty}(0)} \sum_{k=1}^{\infty} (-1)^{k} \frac{2k^2 \operatorname{sinhc}(\pi\sqrt{\smash[b]{k^2 + 2p + 1}})}{(k^2 + p)(k^2 + p + 1)} x^{k^2+p + 1}.} \end{align*}

Then $F$ is absolutely continuous on all of $[0, 1]$. Also, performing integration by parts,

\begin{align*} 0 = \int_{0}^{1} f(x)x^{n^2+p} \, \mathrm{d}x &= \big[ -F(x)x^{n^2+p} \big]_{0}^{1} + (n^2 + p)\int_{0}^{1} F(x) x^{n^2+p-1} \, \mathrm{d}x \end{align*}

and hence, since the boundary term vanishes thanks to $F(1) = 0$,

$$ \int_{0}^{1} F(x) x^{n^2+p-1} \, \mathrm{d}x = 0, \qquad n = 1, 2, \ldots $$

So by choosing $p = 1$, the main claim follows.
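The whole argument can be replayed at finite $n$ in exact arithmetic: build $f_n$ from the normal equations, integrate to get $F_n(x) = \int_x^1 f_n(t)\,\mathrm{d}t$, and check that its moments against $x^{k^2}$ vanish. A sketch with $p = 1$ and $n = 4$ (helper names are mine):

```python
from fractions import Fraction

def solve(A, b):
    """Gaussian elimination over the rationals."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        piv = next(r for r in range(i, n) if M[r][i] != 0)
        M[i], M[piv] = M[piv], M[i]
        for r in range(n):
            if r != i:
                fac = M[r][i] / M[i][i]
                M[r] = [x - fac * y for x, y in zip(M[r], M[i])]
    return [M[i][n] / M[i][i] for i in range(n)]

# f_n for alpha = 0, p = 1, n = 4: project 1 onto span(x^2, x^5, x^10, x^17)
exps = [k * k + 1 for k in range(1, 5)]
G = [[Fraction(1, a + b + 1) for b in exps] for a in exps]
rhs = [Fraction(1, a + 1) for a in exps]
c = solve(G, rhs)

f = {0: Fraction(1)}
for a, ck in zip(exps, c):
    f[a] = -ck                               # f_n = 1 - sum_k c_k x^{alpha_k}

# F_n(x) = int_x^1 f_n(t) dt is again a polynomial, with F_n(1) = 0
F = {e + 1: -ce / (e + 1) for e, ce in f.items()}
F[0] = -sum(F.values())

moments = [sum(cF / (e + k * k + 1) for e, cF in F.items()) for k in range(5)]
print(moments[1:])   # int_0^1 F_n(x) x^{k^2} dx = 0 exactly, for k = 1..4
print(moments[0])    # the k = 0 moment is generally nonzero
```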
