I am working on a physics problem and got stuck on the following equation: Let $a$ be a very small positive number. Is there a bounded function $F$, $0 \leq F \leq 1$, such that for all $x \in \mathbb{R}$, $$ F(x - a) e^{x a} + (1 - F(x+a)) e^{-x a} = e^{a^2/2}. $$ I had never seen anything like this before. Any references are most welcome.
A functional equation
Tags: fa.functional-analysis, functional-equations, real-analysis, special-functions
Related Solutions
Having thought about this question some more, including making some plots of the trajectories of points under iterated applications of $f$ (see Gottfried's circular trajectories), it is possible to devise a numerical test which should show that the series expansion diverges (if indeed it does). Whether it is a practical test or not depends on how badly it fails. My initial rough attempt wasn't conclusive, so I haven't determined the answer to this yet.
This answer still needs working through the details, but I'll post what I have so far. Also, I think that much of what I have to say is already well known. As the thread is now community wiki and anyone can edit this post, feel free to add any references or further details.
The main ideas are as follows, and should apply much more generally to analytic functions defined near a fixed point via an iterative formula, such as $f(f(z))=\exp(z)-1$ in this MO question.
- There are two overlapping open regions bounding the origin from the right and left respectively, and whose union is a neighbourhood of the origin (with the origin removed). The functional equation $f(f(z))=z+f(z)^2$ with $f(z)=z+O(z^2)$ can be uniquely solved in each of these regions, on which it is an analytic function.
- The solution $f$ on each of the regions has the given power series as an asymptotic expansion at zero. Furthermore, it is possible to explicitly calculate a bound for the error terms in the (truncated) expansion.
- The functional equation has a solution in a neighbourhood of the origin (equivalently, the power series has a positive radius of convergence) if and only if the solutions on the two regions agree on their overlap.
One way to verify that $f$ cannot be extended to an analytic function in a neighbourhood of the origin would be to accurately evaluate the solutions on the two domains mentioned above at some point in their overlap, and see if they differ. Another way, which could be more practical, is to use the observation that after the second order term, the only nonzero coefficients in our series expansion are for the odd terms, and the signs are alternating [Edit: this has not been shown to be true though and, in any case, I give a proof that this implies zero radius of convergence below]. Consequently, if we evaluate it at a point on the imaginary axis, truncating after a finite number of terms, we still get a lower bound for $\vert f\vert$. If it does indeed diverge, then this will eventually exceed the upper bound we can calculate as mentioned above, proving divergence. Looking at the first 34 terms from OEIS A107700 was not conclusive though.
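The coefficients can be generated directly from the functional equation: at order $N$ the unknown $c_N$ appears exactly twice in $f(f(z))$ (once from the outer and once from the inner copy of $f$) and not at all in $[z^N](z+f(z)^2)$, so comparing coefficients gives a linear update. A sketch in exact rational arithmetic (the function names and the truncation order are my choices, not from the original posts):

```python
from fractions import Fraction

def mul_trunc(p, q, M):
    # product of polynomials p*q, truncated at degree M
    r = [Fraction(0)] * (M + 1)
    for i, a in enumerate(p[:M + 1]):
        if a:
            for j, b in enumerate(q[:M + 1 - i]):
                r[i + j] += a * b
    return r

def compose_trunc(p, q, M):
    # p(q(z)) mod z^(M+1) via Horner's scheme; assumes q(0) = 0
    r = [Fraction(0)] * (M + 1)
    for a in reversed(p):
        r = mul_trunc(r, q, M)
        r[0] += a
    return r

def series_coeffs(M):
    # coefficients c_0..c_M of f(z) = z + ... solving f(f(z)) = z + f(z)^2:
    # with c_N provisionally set to 0, the coefficient mismatch at order N
    # equals 2*c_N, which determines c_N
    c = [Fraction(0)] * (M + 1)
    c[1] = Fraction(1)
    for N in range(2, M + 1):
        lhs0 = compose_trunc(c, c, N)[N]  # [z^N] f(f(z)) with c_N = 0
        rhs = mul_trunc(c, c, N)[N]       # [z^N] f(z)^2
        c[N] = (rhs - lhs0) / 2
    return c

c = series_coeffs(12)
# truncated evaluation at a point on the imaginary axis; if the odd
# coefficients alternate in sign this is a lower bound for |f(0.25i)|
z = 0.25j
lower = abs(sum(float(cn) * z**n for n, cn in enumerate(c)))
```

Running this reproduces $c_2=1/2$, $c_3=1/4$, $c_4=0$, $c_5=-1/8$, consistent with the observations above; comparing `lower` (for increasing truncation order) against the error bound mentioned above is exactly the proposed divergence test.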
Choose a point $z_0$ close to (and just to the right of) the origin. Using the power series to low order, we approximate $z_1=f(z_0)$. Then, the functional equation can be used to calculate $z_n=f(z_{n-1})=z_{n-2}+z_{n-1}^2$. Similarly, choosing points just to the left of the origin, we can calculate good approximations for the iterates of $f^{-1}$. Doing this for a selection of initial points gives a plot as follows.
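In code, the iteration behind these plots is just a two-term recurrence; only the seed $z_1=f(z_0)$ needs the series, and afterwards the functional equation advances the orbit. A sketch (the seed order and the starting point are arbitrary choices of mine):

```python
def trajectory(z0, steps):
    # z1 = f(z0) from the low-order series f(z) ~ z + z^2/2 + z^3/4,
    # then the functional equation itself: z_n = z_{n-2} + z_{n-1}^2
    z1 = z0 + z0**2 / 2 + z0**3 / 4
    pts = [z0, z1]
    for _ in range(steps):
        pts.append(pts[-2] + pts[-1] ** 2)
    return pts

pts = trajectory(0.1, 10)  # a starting point just to the right of the origin
```

A real starting point just right of the origin drifts monotonically further right, in line with the behaviour of the plotted trajectories; seeds on the imaginary axis trace the circular arcs discussed next.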
Concentrating on a small region about the origin, the iterates of $f$ give clearly defined trajectories - the plot covers a region of radius 0.26 about the origin (much larger than this, and the paths start to go wild). As can be seen, those paths leaving the origin do one of two things. Either they move to the right, curving up or down, until they exit the region. Or, they bend round in a circle and re-enter the origin from the left. The iterates of $f^{-1}$ leaving the origin from the left behave similarly, but reflected about the imaginary axis.
This should not be too surprising, and is behaviour displayed by any analytic function of the form $f(z)=z+\lambda z^2+O(z^3)$ where $\lambda > 0$. Consider approximating $f$ to second order by the Möbius transformation $f(z)\approx g(z)\equiv z/(1-\lambda z)$. Then, $g$ preserves circles centered on the imaginary axis and passing through the origin, and will move points on these circles in a counterclockwise direction above the origin and clockwise below the origin. Any function agreeing with $g$ to second order should behave similarly. In our case, we have $\lambda=1/2$ and $g$ actually agrees with $f$ to third order, so it is not surprising that we get such accurate looking circles (performing a similar exercise with $f(z)=\exp(z)-1$ gave rather more lopsided looking 'circles').
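The circle-preserving property of $g$ is easy to check numerically: a circle through the origin centred at $ir$ on the imaginary axis should map onto itself under $g$. A throwaway check (the radius and sample points are arbitrary):

```python
import cmath

lam = 0.5
def g(z):
    # Mobius approximation g(z) = z / (1 - lam*z)
    return z / (1 - lam * z)

r = 0.125          # an arbitrary small radius
center = 1j * r    # circle through the origin, centred on the imaginary axis
circle = [center + r * cmath.exp(2j * cmath.pi * k / 12) for k in range(12)]
# g should map this circle onto itself, so distances to the centre are preserved
max_dev = max(abs(abs(g(z) - center) - r) for z in circle)
```

The deviation `max_dev` is at rounding level, confirming that the sampled circle is carried onto itself.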
One thing to note from the plot above: the circles of diameter 0.25 above and below the origin are still very well-defined. So, if $f$ does define an analytic function, then its radius of convergence appears to be at least 0.25, and $f(\pm0.25i)$ is not much larger than 0.25 in magnitude. I wonder if summing a few hundred terms of the power series (as computed by Gottfried) will give a larger number? If it does, then this numerical evidence would point at $f$ not being analytic, and a more precise calculation should make this rigorous.
To understand the trajectories, it is perhaps easiest to consider the coordinate transform $z\mapsto -1/z$. In fact, setting $u(z)=-1/z$, then the Möbius transformation above satisfies $g(u(z))=u(z+\lambda)$. More generally, we can calculate the trajectories exiting and entering the origin for a function $f(z)=z+\lambda z^2+O(z^3)$ as follows $$ \begin{align} &u_1(z)=\lim_{n\to\infty}f^{n}(-1/(z-n\lambda)),\\\\ &u_2(z)=\lim_{n\to\infty}f^{-n}(-1/(z+n\lambda)). \end{align}\qquad\qquad{\rm(1)} $$ Then, $u_1$ and $u_2$ map lines parallel to the real axis onto the trajectories of $f$ and $f^{-1}$ respectively and, after reading this answer, I gather are inverses of Abel functions. We can do a similar thing for our function, using the iterative formula in place of $f^{n}$. Then, we can define $f_i$ according to $f_i(z)=u_i(\lambda+u_i^{-1}(z))$, which will be well-defined analytic functions on the trajectories of $f$ (resp. $f^{-1}$) before they go too far from the origin (after which $u_i$ might not be one-to-one). Then $f_i$ will automatically satisfy the functional equation, and will give an analytic function in a neighbourhood of the origin if they agree on the intersection of their domains (consisting of the circular trajectories exiting and re-entering the origin).
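For the Möbius model $g$ the limit in (1) can be checked exactly, since each application of $g$ shifts the Abel coordinate $-1/w$ by $\lambda$, so $g^{n}(-1/(z-n\lambda))=-1/z$ for every $n$. This makes a useful sanity check of the recipe before applying it to $f$ (the sample point and iteration count below are arbitrary):

```python
lam = 0.5
def g(z):
    return z / (1 - lam * z)

def u1(z, n):
    # u_1(z) ~ g^n(-1/(z - n*lam)); exact for the Mobius model, since
    # -1/g(w) = -1/w + lam, i.e. g is a shift by lam in the coordinate -1/w
    w = -1 / (z - n * lam)
    for _ in range(n):
        w = g(w)
    return w

z = 0.3 + 0.2j
val = u1(z, 40)  # should equal -1/z up to rounding, independently of n
```

For $f$ itself, replacing `g` by the two-term recurrence gives a numerical approximation to $u_1$, with the limit reached only approximately for finite $n$.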
[Maybe add: Calculate error bounds for $u_i$ and the asymptotic expansion.]
Update: Calculating trajectories of larger radius than plotted above gives the following.
Intersecting trajectories http://i53.tinypic.com/2wml7x0.gif
The trajectories leaving the origin from the right and entering from the left do not agree with each other, and intersect. This is inconsistent with the existence of a function $f$ in a neighborhood of the origin solving the functional equation, as the two solutions $f_1,f_2$ defined on trajectories respectively leaving and entering the origin do not agree. And, if the solutions $f_1,f_2$ do not agree on the larger trajectories then, by analytic continuation, they cannot agree close to the origin. So, if it can be confirmed that this behaviour is real (and not numerical inaccuracies), then the radius of convergence is zero.
Update 2: It was noted in the original question that, for $n\ge3$, all coefficients $c_n$ in the power series expansion of $f$ are zero for even $n$, and that the odd degree coefficients are alternating in sign, so that $(-1)^kc_{2k-1}\ge0$ for $k\ge2$. This latter observation has not been proven, although Gottfried has calculated at least 128 coefficients (and I believe that this observation still holds true for all these terms). I'll give a proof of the following: if the odd degree coefficients $c_n$ are alternating in sign for $n\ge3$, then the radius of convergence is zero.
To obtain a contradiction, let us suppose a positive radius of convergence $\rho$, and that the odd degree coefficients are alternating in sign after the 3rd term. This would imply that $$ f(it)=-t^2/2 + it(1-t^2/4 - h(t^2))\qquad\qquad{\rm(2)} $$ where $h$ has nonnegative coefficients, so $h\ge0$ for real $t$. Also, $h(t)\to\infty$ as $t\to\rho_-$. For small $t$, $f(it)=it + o(t)$ has positive imaginary part so, by continuity, there will be some $0 < t_0 < \rho$ such that $\Im[f(it_0)]=0$. Choose $t_0$ minimal. If $\rho > 2$ then (2) gives $\Im[f(2i)]\le2(1-2^2/4)=0$ so, in any case, $t_0\le2$. Then, for $0 \le t \le t_0$, the imaginary part of $f(it)$ is bounded by $t(1-t^2/4)$ and (2) gives $$ \begin{align} \vert f(it)\vert^2 &\le t^4/4 + t^2(1-t^2/4)^2\\\\ &=t^2(1-t^2(4-t^2)/16)\\\\ &\le t^2 < \rho^2. \end{align} $$ So, $f(it)$ is within the radius of convergence for $0 \le t \le t_0$. Also, by construction, the functional equation $f(f(it))=it+f(it)^2$ holds for $\vert t\vert$ small. Then, by analytic continuation the functional equation holds on $0\le t \le t_0$ so, $$ \Im[f(f(it_0))]=\Im[it_0+f(it_0)^2]=t_0\not=0. $$ However, $f(it_0)$ and the power series coefficients are all real, so $f(f(it_0))$ is real, giving the contradiction.
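The algebraic step in the bound on $\vert f(it)\vert^2$ can be spot-checked numerically; the snippet below just verifies the polynomial identity $t^4/4+t^2(1-t^2/4)^2=t^2\bigl(1-t^2(4-t^2)/16\bigr)$ and that the right-hand side stays below $t^2$ on $0\le t\le 2$ (a throwaway verification, not part of the proof):

```python
def bound_terms(t):
    # the two sides of the identity used when bounding |f(it)|^2
    lhs = t**4 / 4 + t**2 * (1 - t**2 / 4) ** 2
    rhs = t**2 * (1 - t**2 * (4 - t**2) / 16)
    return lhs, rhs

ts = (0.1, 0.5, 1.0, 1.5, 2.0)
checks = [bound_terms(t) for t in ts]
```

Since $4-t^2\ge0$ on $[0,2]$, the common value is at most $t^2$, which is the step used to conclude $\vert f(it)\vert<\rho$.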
$\def\R{\mathbb R}$$\def\aha{{1/2}}$$\def\maha{{-1/2}}$ Edit. Now it looks correct.
I can prove that $x_t$ grows at most like $t$ for $t\geq 1$, up to a multiplicative constant.
For simplicity, assume that $\|f\|_{L^\infty}\leq 1$. Call $$ 0<\varepsilon:=\min_{x\in[1,2]} f(x). $$ Now write $u$ as a convolution with the heat kernel (I will use the kernel for the equation $u_t=u_{xx}$ and drop the factor $1/2$, as this amounts simply to a linear rescaling of the time variable). Substituting $y=t^\aha z$, $$ u(t,x)=\int_\R f(y)\frac{1}{\sqrt{4\pi t}}e^{-\frac{(x-y)^2}{4t}}dy=\int_\R f(t^\aha z)\frac{1}{\sqrt{4\pi}}e^{-\frac{(t^\maha x-z)^2}{4}}dz. $$ Call $\Phi(x):=\int_{-\infty}^x\frac{1}{\sqrt{4\pi}}e^{-z^2/4}dz.$
We split the above integral in two pieces and estimate both from below: $$ u(t,x)\geq -\int_{-\infty}^0 \frac{1}{\sqrt{4\pi}}e^{-\frac{(t^\maha x-z)^2}{4}} dz + \varepsilon\int_{t^\maha}^{2t^\maha} \frac{1}{\sqrt{4\pi}}e^{-\frac{(t^\maha x-z)^2}{4}} dz=$$ $$ = -\Phi(-t^\maha x) + \varepsilon\left(\Phi(t^\maha(2-x))-\Phi(t^\maha(1-x))\right). $$ Now we use the estimate on $\Phi$ (which can be checked by differentiating both sides) $$ \Phi(z)\leq \frac{1}{\sqrt{4\pi}}\frac{4}{|z|+1}e^{-z^2/4},\quad\forall z\leq 0 $$ and the mean value theorem to show that for $x\geq 0$ and some $\xi\in[1,2]$ we have $$ u(t,x)\geq \varepsilon\frac{1}{\sqrt{4\pi t}}e^{-\frac{(x-\xi)^2}{4t}}-\frac{1}{\sqrt{4\pi}}\frac{4}{|x/\sqrt t|+1}e^{-x^2/4t}\geq $$ $$ \geq \varepsilon\frac{1}{\sqrt{4\pi t}}e^{-\frac{(x-1)^2}{4t}}-\frac{1}{\sqrt{4\pi}}\frac{4}{|x/\sqrt t|+1}e^{-x^2/4t}= $$ $$ =\frac{1}{\sqrt{4\pi t}}e^{-\frac{x^2}{4t}}\left(e^{\log(\varepsilon) +\frac{x}{2t}-\frac{1}{4t}}-e^{\log(4)-\log(\frac{x}{t}+\frac{1}{\sqrt t}) }\right).$$ Simplifying, one obtains that for non-negative $x$, $u(t,x)$ is positive whenever $$ \frac{x}{t}>h\left(\frac{1}{2\sqrt t}+\frac{1}{4t}-\log(\varepsilon/4)\right)-t^\maha,$$ where $h=g^{-1}$ is the inverse of the function $g(z)=\log(z)+z/2$. After further estimates on $h$, one can show that $u(t,x)>0$ whenever $t\geq 1$ and $$ x>Ct$$ for some constant $C$ depending on $\varepsilon$ ($C=-\log(\varepsilon/20)$ should work). By symmetry, a similar bound holds from below, which implies that $$ |x_t|\leq Ct $$ for all $t\geq 1$. Combined with the asymptotic formula for small $t$, this should give the desired result.
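Taking $\Phi$ with density $e^{-z^2/4}/\sqrt{4\pi}$, we have $\Phi(x)=\bigl(1+\operatorname{erf}(x/2)\bigr)/2$, so the Gaussian-tail estimate on $\Phi$ can be sanity-checked on a grid (a numerical spot check of the inequality, not a proof; the grid is my choice):

```python
import math

def Phi(x):
    # Phi(x) = int_{-inf}^x (4*pi)^(-1/2) exp(-z^2/4) dz = (1 + erf(x/2)) / 2
    return (1 + math.erf(x / 2)) / 2

def tail_bound(z):
    # the claimed upper bound for Phi(z), valid for z <= 0
    return 4 / (math.sqrt(4 * math.pi) * (abs(z) + 1)) * math.exp(-z * z / 4)

grid = [-k / 10 for k in range(201)]  # from 0 down to -20
worst = max(Phi(z) - tail_bound(z) for z in grid)
```

`worst` stays negative on the whole grid, consistent with the estimate; the actual proof goes by differentiating both sides, as noted above.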
I am not entirely sure every step of the proof is correct, but in any case this idea should work to establish the bound you want.
Best Answer
Denote $F(x)=G(x)e^{x^2/2}$; this factor is chosen in order to get an equation of the form $G(x+a)-G(x-a)=$ a given function. Indeed, you have $G(x)\in [0,e^{-x^2/2}]$ and $$G(x-a)e^{x^2/2+a^2/2}+e^{-xa}-G(x+a)e^{x^2/2+a^2/2}=e^{a^2/2},\\ G(x+a)-G(x-a)=e^{-(x+a)^2/2}-e^{-x^2/2}.$$ One such function $G=:G_0$ is straightforward: $$G_0(t)=\sum_{k=0}^\infty (-1)^ke^{-(t-ka)^2/2},$$ so $G-G_0$ must be $2a$-periodic. On the other hand, $G(x)$ goes to 0 as $x\to-\infty$, and so does $G_0$. Thus $G-G_0$ is a periodic function which goes to 0 as $x\to-\infty$; the only such function is identically 0. Therefore $G\equiv G_0$, and the question reduces to whether $G_0(x)\in [0,e^{-x^2/2}]$ for all $x$. But $$\lim_{n\to +\infty}G_0(x+2na)=\sum_{k\in \mathbb{Z}}(-1)^k e^{-(x-ka)^2/2}$$ depends on $x$ (it is closely related to the Jacobi theta function) and is not in general equal to 0, whereas the bound $G_0\leq e^{-x^2/2}$ would force this limit to vanish. So alas, your $F$ does not exist.
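Both steps can be checked numerically: the truncated sum for $G_0$ satisfies the difference equation to machine precision, and the two-sided alternating sum genuinely depends on $x$. For visibility the sketch below uses $a=1$ (for small $a$ the limit is nonzero but extremely small, so the conclusion is the same); the truncation level is my choice:

```python
import math

def G0(t, a, K=200):
    # truncated one-sided alternating sum defining G_0
    return sum((-1) ** k * math.exp(-((t - k * a) ** 2) / 2) for k in range(K))

def S(x, a, K=200):
    # two-sided sum: the limit of G0(x + 2na) as n -> infinity
    return sum((-1) ** k * math.exp(-((x - k * a) ** 2) / 2)
               for k in range(-K, K + 1))

a, x = 1.0, 0.4
# residual of the difference equation G(x+a) - G(x-a) = e^{-(x+a)^2/2} - e^{-x^2/2}
resid = (G0(x + a, a) - G0(x - a, a)
         - (math.exp(-((x + a) ** 2) / 2) - math.exp(-(x ** 2) / 2)))
```

For $a=1$ the limit sum is about $0.036$ at $x=0$ yet vanishes exactly at $x=a/2$ (the terms for $k$ and $1-k$ cancel in pairs), so it does depend on $x$, as claimed.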