Functional Equation $f(f(x))=x+f(x)^2$ – Analysis and References

Tags: ds.dynamical-systems, functional-equations, generating-functions, reference-request

I'd like to gather information and references on the following functional equation for power series: $$f(f(x))=x+f(x)^2,\qquad f(x)=\sum_{k=1}^\infty c_k x^k$$

(so $c_0=0$ is imposed).

First things that can be established quickly:

  • it has a unique solution in $\mathbb{R}[[x]]$, as the coefficients are recursively determined;
  • its formal inverse is $f^{-1}(x)=-f(-x)$, as both uniquely solve the same functional equation;
  • since the equation may be rewritten $f(x)=f^{-1}(x)+x^2$, it also follows that $f(x)+f(-x)=x^2$, the even part of $f$ is just $x^2/2$, and $c_2$ is the only non-zero coefficient of even degree;
  • from the recursive formula for the coefficients (see below), they appear to be integer multiples of negative powers of $2$. Remark: it seems (but I did not try to prove it) that $2^{k-1}c_k$ is an integer for all $k$, and that $(-1)^k c_{2k-1} > 0$ for all $k\geq 2$.

Question: is there a quick way to see that this series has a positive radius of convergence, and possibly to compute or estimate it?

[updated] After the numerical results and various comments, a more reasonable question seems to be, rather: how to prove that this series does not converge.

Note that the radius of convergence has to be finite, otherwise $f$ would be an automorphism of $\mathbb{C}$. Yes, of course I did evaluate the first coefficients and put them in OEIS, getting the sequence of numerators A107700; unfortunately, it has no further information.

Motivation. I want to understand a simple discrete dynamical system on $\mathbb{R}^2$, namely the diffeomorphism $\phi: (x,y)\mapsto (y, x+y^2)$, which has a unique fixed point at the origin. It is not hard to show that the stable manifold and the unstable manifold of $\phi$ are
$$W^s(\phi)=\mathrm{graph}\big( g_{|(-\infty,\ 0]}\big)$$
$$W^u(\phi)=\mathrm{graph}\big( g_{|[0, \ +\infty)}\big)$$

for a certain continuous, strictly increasing function $g:\mathbb{R}\to\mathbb{R}$ that solves the above functional equation. Therefore, knowing that the power series solution has a positive radius of convergence immediately implies that it coincides locally with $g$ (indeed, if $f$ converges we have $f(x)=x+x^2/2+o(x^2)$ at $x=0$, so its graph on $x\le0$ is included in $W^s$ and its graph on $x\ge0$ is included in $W^u$: therefore the whole graph of $f$ would be included in the graph of $g$, implying that $f$ coincides locally with $g$). If this is the case, $g$ is then analytic everywhere, for suitable iterates of $\phi$ give analytic diffeomorphisms of any large portion of the graph of $g$ onto a small portion close to the origin.

One may also argue the other way, showing directly that $g$ is analytic, which would imply the convergence of $f$. Although it seems feasible, the latter argument would be a bit indirect, and in that case I'd like to make sure there is no easy direct way of working on the coefficients (of course, it may happen that $g$ is not analytic and $f$ is not convergent).

Details: equating the coefficients on both sides of the equation for $f$, one has, for the 2-jet,
$$c_1^2x+(c_1c_2+c_2c_1^2)x^2 =x + c_1^2x^2,$$
whence $c_1=1$ and $c_2=\frac 1 2;$ and for $n>2$
$$2c_n=\sum_{1\le j\le n-1}c_jc_{n-j}\;-\sum_{\substack{1 < r < n \\ k_1+\dots+k_r=n}}c_r\,c_{k_1}\cdots c_{k_r},$$
the inner sum running over all compositions $(k_1,\dots,k_r)$ of $n$ into $r$ parts $k_i\ge1$.
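This recursion is straightforward to implement in exact rational arithmetic. The following Python sketch (my own illustration, not part of the original discussion) expands $f(f(x))$ via truncated series products and checks the empirical observations above on the first twenty coefficients.

```python
from fractions import Fraction

def series_mul(a, b, N):
    """Product of two truncated power series (lists indexed by degree), mod x^{N+1}."""
    res = [Fraction(0)] * (N + 1)
    for i, ai in enumerate(a[:N + 1]):
        if ai:
            for j, bj in enumerate(b[:N + 1 - i]):
                res[i + j] += ai * bj
    return res

def coefficients(N):
    """Coefficients c_0, ..., c_N of the unique formal solution of f(f(x)) = x + f(x)^2."""
    c = [Fraction(0), Fraction(1), Fraction(1, 2)] + [Fraction(0)] * (N - 2)
    for n in range(3, N + 1):
        conv = sum(c[j] * c[n - j] for j in range(1, n))  # [x^n] f(x)^2
        comp = Fraction(0)
        power = c[:n + 1]             # f^1; c_n is still 0 but never contributes for r >= 2
        for r in range(2, n):
            power = series_mul(power, c[:n + 1], n)       # f^r mod x^{n+1}
            comp += c[r] * power[n]
        c[n] = (conv - comp) / 2
    return c

c = coefficients(20)
print(c[1:7])  # c_1..c_6 = 1, 1/2, 1/4, 0, -1/8, 0
```

For these terms, all even-degree coefficients beyond $c_2$ vanish, every $2^{k-1}c_k$ is an integer, and the odd-degree coefficients alternate in sign, in agreement with the remarks above.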

More details: since it may be of interest, let me add the argument to see $W^s(\phi)$ and
$W^u(\phi)$ as graphs.

Since $\phi$ is conjugate to $\phi^{-1}=J\phi J$ by the
linear involution $J:(x,y)\mapsto (-y,-x)$, we have
$W^u(\phi):=W^s(\phi^{-1})=J\,W^s(\phi)$, and it suffices to study $\Gamma:=W^s(\phi)$. For any
$(a,b)\in\mathbb{R}^2$ we have $\phi^n(a,b)=(x_n,x_{n+1})$, with $x_0=a$, $x_1=b$,
and $x_{n+1}=x_n^2+x_{n-1}$ for all $n\in\mathbb{N}$. From this it is easy to see
that the subsequences $(x_{2n})$ and $(x_{2n+1})$ are both nondecreasing; moreover, $x_{2n}$ is bounded
above iff $x_{2n+1}$ is bounded above, iff $x_{2n}$ converges, iff $x_n\to 0$, iff
$x_n\le 0$ for all $n\in\mathbb{N}$.

As a consequence, $(a,b)\in \Gamma$ iff $\phi^n(a,b)\in
Q:=(-\infty,0]\times(-\infty,0]$ for all $n\in\mathbb{N}$, whence $\Gamma=\bigcap_{n\in\mathbb{N}}
\phi^{-n}(Q)$. The latter is a nested intersection of connected, unbounded, closed
sets containing the origin, therefore such is $\Gamma$ too.

In particular, for any $a\leq 0$ there exists at least one $b\leq 0$ such that
$(a,b)\in \Gamma$. To prove that $b$ is unique, that is, that $\Gamma$ is a graph
over $(-\infty,0]$, the argument is as follows. Consider the function
$V:\Gamma\times\Gamma\to\mathbb{R}$ such that $V(p,p'):=(a-a')(b-b')$ for all
$p:=(a,b)$ and $p':=(a',b')$ in $\Gamma$.

Showing that $\Gamma$ is the graph of a
strictly increasing function is equivalent to showing that $V(p,p')>0$ for every pair of
distinct points $p\neq p'$ in $\Gamma$.

By direct computation we have
$V\big(\phi(p),\phi(p')\big)\leq V(p,p')$ and $\big\|\phi(p)-\phi(p')\big\|^2\geq
\|p-p'\|^2+2V(p,p')(b+b')$. Now, if a pair $(p,p')\in\Gamma\times\Gamma$ has
$V(p,p')\le0$, then by induction $V\big(\phi^n(p),\phi^n(p')\big)\leq 0$ and
$\big\|\phi^n(p)-\phi^n(p')\big\|^2\geq \|p-p'\|^2$ for all $n$, so $p=p'$, since both
$\phi^n(p)$ and $\phi^n(p')$ tend to $0$. This proves that $\Gamma$ is the graph of a
strictly increasing function $g$: since $\Gamma$ is connected, $g$
is also continuous. Of course, the fact that $\Gamma$ is $\phi$-invariant implies that $g$ solves the functional equation.
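As a purely numerical aside (my own sketch, not part of the argument above), this characterization of $\Gamma$ suggests a way to approximate $g$ on $(-\infty,0]$: orbits started at $(a,b)$ with $b\neq g(a)$ must eventually leave $Q$ through some $x_n>0$, and numerically the first positive term appears at an odd index when $b>g(a)$ and at an even index when $b<g(a)$ (a heuristic consistent with the eigenvalue $-1$ of $D\phi(0)$, which swaps the roles of the two coordinates at each step). Assuming this parity criterion, a bisection recovers $g$:

```python
def escape_index(a, b, max_iter=50000):
    """Iterate x_{n+1} = x_n^2 + x_{n-1} from (x_0, x_1) = (a, b); return the
    first index n with x_n > 0, or None if the orbit stays in Q throughout."""
    x_prev, x_cur = a, b
    if x_cur > 0:
        return 1
    for n in range(2, max_iter):
        x_prev, x_cur = x_cur, x_cur * x_cur + x_prev
        if x_cur > 0:
            return n
    return None

def g_stable(a, b_lo=-2.0, b_hi=0.0):
    """Bisect for b = g(a), a <= 0, using the (numerically observed) parity
    criterion: odd escape index => b > g(a), even => b < g(a)."""
    for _ in range(40):
        b = 0.5 * (b_lo + b_hi)
        n = escape_index(a, b)
        if n is None or n % 2 == 1:
            b_hi = b     # on or above the graph of g
        else:
            b_lo = b     # below the graph of g
    return 0.5 * (b_lo + b_hi)

print(g_stable(-0.1))
```

The value returned for $a=-0.1$ is close to the asymptotic series approximation $-0.1+0.005-0.00025\approx-0.09525$.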

Best Answer

Having thought about this question some more, including making some plots of the trajectories of points under iterated applications of $f$ (see Gottfried's circular trajectories), it is possible to devise a numerical test which should show that the series expansion diverges (if indeed it does). Whether it is a practical test or not depends on how badly it fails. My initial rough attempt wasn't conclusive, so I haven't determined the answer to this yet.

This answer still needs working through the details, but I'll post what I have so far. Also, I think that much of what I have to say is already well known. As the thread is now community wiki and anyone can edit this post, feel free to add any references or further details.

The main ideas are as follows, and should apply much more generally to analytic functions defined near a fixed point via an iterative formula, such as $f(f(z))=\exp(z)-1$ in this MO question.

  1. There are two overlapping open regions, bordering the origin on the right and on the left respectively, whose union is a punctured neighbourhood of the origin. The functional equation $f(f(z))=z+f(z)^2$ with $f(z)=z+O(z^2)$ can be uniquely solved in each of these regions, on which it is an analytic function.
  2. The solution $f$ on each of the regions has the given power series as an asymptotic expansion at zero. Furthermore, it is possible to explicitly calculate a bound for the error terms in the (truncated) expansion.
  3. The functional equation has a solution in a neighbourhood of the origin (equivalently, the power series has a positive radius of convergence) if and only if the solutions on the two regions agree on their overlap.

One way to verify that $f$ cannot be extended to an analytic function in a neighbourhood of the origin would be to accurately evaluate the solutions on the two domains mentioned above at some point in their overlap, and see if they differ. Another way, which could be more practical, is to use the observation that after the second order term, the only nonzero coefficients in our series expansion are for the odd terms, and the signs are alternating [Edit: this has not been shown to be true though and, in any case, I give a proof that this implies zero radius of convergence below]. Consequently, if we evaluate it at a point on the imaginary axis, truncating after a finite number of terms, we still get a lower bound for $\vert f\vert$. If it does indeed diverge, then this will eventually exceed the upper bound we can calculate as mentioned above, proving divergence. Looking at the first 34 terms from OEIS A107700 was not conclusive though.

Choose a point $z_0$ close to (and just to the right of) the origin. Using the power series to low order, we approximate $z_1=f(z_0)$. Then, the functional equation can be used to calculate $z_n=f(z_{n-1})=z_{n-2}+z_{n-1}^2$. Similarly, choosing points just to the left of the origin, we can calculate good approximations for the iterates of $f^{-1}$. Doing this for a selection of initial points gives a plot as follows.
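The iteration just described is easy to reproduce; here is a minimal Python sketch (my own illustration) that seeds $z_1=f(z_0)$ with the series through third order, which is all that is needed close to the origin:

```python
def trajectory(z0, steps=600, stop_radius=0.5):
    """Forward orbit of z0 under f: seed z1 = f(z0) from the low-order series
    f(z) ~ z + z^2/2 + z^3/4, then iterate z_n = z_{n-2} + z_{n-1}^2."""
    z1 = z0 + z0**2 / 2 + z0**3 / 4
    zs = [z0, z1]
    for _ in range(steps):
        zs.append(zs[-2] + zs[-1] ** 2)
        if abs(zs[-1]) > stop_radius:   # leaving the region of interest
            break
    return zs

real_path = trajectory(0.05)         # moves right along the real axis and escapes
loop_path = trajectory(0.02 + 0.1j)  # bends round and re-enters the origin from the left
```

For the real seed the orbit increases along the real axis until it leaves the region, while the complex seed traces one of the near-circular loops visible in the plot.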

Trajectories of f

Concentrating on a small region about the origin, the iterates of $f$ give clearly defined trajectories - the plot includes a region of radius 0.26 about the origin (much larger, and the paths do start to go wild). As can be seen, those paths leaving the origin do one of two things. Either they move to the right, curving up or down, until they exit the region. Or, they bend round in a circle and re-enter the origin from the left. The iterates of $f^{-1}$ leaving the origin from the left behave similarly, but reflected about the imaginary axis.

This should not be too surprising, and is behaviour displayed by any analytic function of the form $f(z)=z+\lambda z^2+O(z^3)$ where $\lambda > 0$. Consider approximating to second order by the Möbius transformation $f(z)\approx g(z)\equiv z/(1-\lambda z)$. Then $g$ preserves circles centred on the imaginary axis and passing through the origin, and moves points on these circles counterclockwise above the origin and clockwise below it. A second-order approximation to $g$ should behave similarly. In our case, we have $\lambda=1/2$ and $g$ actually agrees with $f$ to third order, so it is not surprising that we get such accurate-looking circles (performing a similar exercise with $f(z)=\exp(z)-1$ gave rather more lopsided-looking 'circles').
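The circle invariance is immediate in the coordinate $w=-1/z$: the circle $\vert z-ic\vert=\vert c\vert$ through the origin becomes the horizontal line $\Im w=1/(2c)$, on which $g$ acts as the translation $w\mapsto w+\lambda$. A quick numerical confirmation of this (illustrative) fact:

```python
import cmath

lam = 0.5

def g(z):
    """Möbius approximation g(z) = z / (1 - lam*z)."""
    return z / (1 - lam * z)

c = 0.13  # circle of radius c centred at i*c, passing through the origin
for k in range(1, 12):
    z = 1j * c + c * cmath.exp(0.5j * k)   # sample points on the circle
    w = g(z)
    assert abs(abs(w - 1j * c) - c) < 1e-12, "g should preserve the circle"
print("circle preserved")
```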

One thing to note from the plot above: the circles of diameter 0.25 above and below the origin are still very well-defined. So, if $f$ does define an analytic function, then its radius of convergence appears to be at least 0.25, and $f(\pm0.25i)$ is not much larger than 0.25 in magnitude. I wonder if summing a few hundred terms of the power series (as computed by Gottfried) will give a larger number? If it does, then this numerical evidence would point at $f$ not being analytic, and a more precise calculation should make this rigorous.

To understand the trajectories, it is perhaps easiest to consider the coordinate transform $z\mapsto -1/z$. In fact, setting $u(z)=-1/z$, then the Mobius transform above satisfies $g(u(z))=u(z+\lambda)$. More generally, we can calculate the trajectories exiting and entering the origin for a function $f(z)=z+\lambda z^2+O(z^3)$ as follows $$ \begin{align} &u_1(z)=\lim_{n\to\infty}f^{n}(-1/(z-n\lambda)),\\\\ &u_2(z)=\lim_{n\to\infty}f^{-n}(-1/(z+n\lambda)). \end{align}\qquad\qquad{\rm(1)} $$ Then, $u_1$ and $u_2$ map lines parallel to the real axis onto the trajectories of $f$ and $f^{-1}$ respectively and, after reading this answer, I gather are inverses of Abel functions. We can do a similar thing for our function, using the iterative formula in place of $f^{n}$. Then, we can define $f_i$ according to $f_i(z)=u_i(\lambda+u_i^{-1}(z))$, which will be well-defined analytic functions on the trajectories of $f$ (resp. $f^{-1}$) before they go too far from the origin (after which $u_i$ might not be one-to-one). Then $f_i$ will automatically satisfy the functional equation, and will give an analytic function in a neighbourhood of the origin if they agree on the intersection of their domain (consisting of the circular trajectories exiting and re-entering the origin).
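Formula (1) is easy to test numerically with the same iterative scheme (again my own rough sketch: the seed is fed through the series to third order, and no convergence analysis is attempted). Since $f$ agrees with the Möbius map $g$ to third order, the limit appears to converge without any logarithmic correction term:

```python
LAM = 0.5

def u1(z, n=400, extra=0):
    """Approximate u1(z) = lim_n f^n(-1/(z - n*LAM)) from formula (1),
    iterating w_k = w_{k-2} + w_{k-1}^2 in place of f; with extra=1 the
    function returns f^{n+1}(seed), i.e. approximately f(u1(z))."""
    w0 = -1.0 / (z - n * LAM)
    w1 = w0 + w0**2 / 2 + w0**3 / 4   # f(w0) from the series
    for _ in range(n - 1 + extra):
        w0, w1 = w1, w0 + w1 * w1
    return w1

z = 5j
a = u1(z)
b = u1(z, extra=1)                    # equals f(a) up to rounding
# Abel-type property: advancing the orbit by one step applies f, i.e. adds
# LAM in the w = -1/z coordinate; compare against the series for f
assert abs(b - (a + a**2 / 2 + a**3 / 4)) < 1e-3
assert abs(a - (-1 / z)) < 0.05       # u1(z) stays close to -1/z for this z
```

Doubling $n$ changes the computed value only slightly, which is some evidence that the limit in (1) does exist for this $f$.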

[Maybe add: Calculate error bounds for $u_i$ and the asymptotic expansion.]


Update: Calculating trajectories of larger radius than plotted above gives the following.

Intersecting trajectories

The trajectories leaving the origin from the right and entering from the left do not agree with each other, and intersect. This is inconsistent with the existence of a function $f$ in a neighborhood of the origin solving the functional equation, as the two solutions $f_1,f_2$ defined on trajectories respectively leaving and entering the origin do not agree. And, if the solutions $f_1,f_2$ do not agree on the larger trajectories then, by analytic continuation, they cannot agree close to the origin. So, if it can be confirmed that this behaviour is real (and not numerical inaccuracies), then the radius of convergence is zero.


Update 2: It was noted in the original question that, for $n\ge3$, all coefficients $c_n$ in the power series expansion of $f$ are zero for even $n$, and that the odd degree coefficients are alternating in sign, so that $(-1)^kc_{2k-1}\ge0$ for $k\ge2$. This latter observation has not been proven, although Gottfried has calculated at least 128 coefficients (and I believe that this observation still holds true for all these terms). I'll give a proof of the following: if the odd degree coefficients $c_n$ are alternating in sign for $n\ge3$, then the radius of convergence is zero.

To obtain a contradiction, let us suppose a positive radius of convergence $\rho$, and that the odd degree coefficients are alternating in sign after the 3rd term. This would imply that $$ f(it)=-t^2/2 + it(1-t^2/4 - h(t^2))\qquad\qquad{\rm(2)} $$ where $h$ has nonnegative coefficients, so $h\ge0$ for real $t$. Also, $h(t^2)\to\infty$ as $t\to\rho_-$. For small $t$, $f(it)=it + o(t)$ has positive imaginary part so, by continuity, there will be some $0 < t_0 < \rho$ such that $\Im[f(it_0)]=0$. Choose $t_0$ minimal. If $\rho > 2$ then (2) gives $\Im[f(2i)]\le2(1-2^2/4)=0$ so, in any case, $t_0\le2$. Then, for $0 \le t \le t_0$, the imaginary part of $f(it)$ is bounded by $t(1-t^2/4)$ and (2) gives $$ \begin{align} \vert f(it)\vert^2 &\le t^4/4 + t^2(1-t^2/4)^2\\\\ &=t^2(1-t^2(4-t^2)/16)\\\\ &\le t^2 < \rho^2. \end{align} $$ So, $f(it)$ is within the radius of convergence for $0 \le t \le t_0$. Also, by construction, the functional equation $f(f(it))=it+f(it)^2$ holds for $\vert t\vert$ small. Then, by analytic continuation, the functional equation holds on $0\le t \le t_0$, so $$ \Im[f(f(it_0))]=\Im[it_0+f(it_0)^2]=t_0\not=0. $$ However, $f(it_0)$ and the power series coefficients are all real, so $f(f(it_0))$ is real, giving the contradiction.
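As a sanity check on the algebra in the display above (my own verification): both sides of the middle equality are degree-$6$ polynomials in $t$, so verifying equality at seven distinct rational points proves the identity, and the final step $\le t^2$ amounts to $t^2(4-t^2)\ge0$, which holds for $0\le t\le2$:

```python
from fractions import Fraction

def lhs(t):
    return t**4 / 4 + t**2 * (1 - t**2 / 4) ** 2

def rhs(t):
    return t**2 * (1 - t**2 * (4 - t**2) / 16)

# degree-6 polynomials agreeing at 7 distinct points are identical
assert all(lhs(Fraction(k, 3)) == rhs(Fraction(k, 3)) for k in range(7))

# and the bound rhs(t) <= t^2 holds on 0 <= t <= 2
assert all(rhs(Fraction(k, 100)) <= Fraction(k, 100)**2 for k in range(201))
```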
