How can one find solutions to $f''(x)=f(f(x))$?

Tags: function-and-relation-composition, ordinary-differential-equations

This was a differential equation that I came up with. I have never seen any ODEs that involve composition, so I have no idea how to approach this. One solution appears to be $f(x)=0$, but I can't think of any more.

EDIT: In regards to ODEs, I'm aware of separation of variables, integrating factors, solving linear homogeneous ODEs, the method of undetermined coefficients, applying Taylor series, and Fourier and Laplace transforms (although I'm not that experienced with transforms). That being said, I have not formally taken an ODE course.

Best Answer

This answer is meant, more generally, to present ensembles and techniques for solving iterated functional differential equations. I'll address the question at hand in the process as well.

I'll present some literature at the end, but I'll also talk briefly about how these equations arise and how they are solved. This will mostly be pitched at someone who knows some standard ODE techniques but has perhaps not encountered them in full rigour.


A brief introduction

A "retarded" differential equation is best understood as a model with "delay", where the current rate of change should ideally depend upon the current value of the function, but due to natural considerations ends up depending on time points in the past. For example, $x'(t) = x(t-1)$ can be thought of as a "delayed" differential equation in this vein: you wish that $x'(t) = x(t)$ were the case, but the system is set up so as to lag a little.

It is possible to rewrite this equation using some coordinate changes, so that it takes the form $y'(s) = g(y(s),G(s,y(s)))$ where $g,G$ are appropriate functions. This is still, perhaps, quite solvable.

The problem is that in some systems, due to a lack of observability, $G$ cannot be found explicitly. Instead, all that is known to us is the value of $y$ at $G$, and this dependence may not be invertible. It's like not knowing what's inside a box, but knowing its size and trying to investigate it starting from there.

In that sense, what you get is $y'(s) = g(y(s), y(G(s,y(s))))$. That's where you can see iteration coming in. Some of these models appeared in epidemiology and the two-body problem of classical electrodynamics; see e.g. [1] and [2]. This prompted the study of some of the simplest models in this field.

More reading can be found at [5]. Since the transformation from $x$ to $y$ typically involves some time reversal (essentially, treating the "backwards" version of a problem), one may read the very interesting [6] as well; think of it as the reversal of a delay differential equation.


Definitions and notation

As per [3], we have the following notation : let $x(t)$ be any function and let $$x^{[0]}(t) = t , x^{[1]}(t) = x(t), x^{[2]}(t) = x(x(t)), x^{[3]}(t) = x(x(x(t))), \ldots$$

denote the iterates of $x$. Similarly, let $$x^{(0)}(t) = x(t), x^{(1)}(t) = x'(t), x^{(2)}(t) = x''(t),\ldots$$

denote the derivatives of $x(t)$. A relation of the form $$ G(t,x'(t),\ldots,x^{(n)}(t), x^{[1]}(t), \ldots, x^{[m]}(t)) = 0 $$

is called an iterative functional differential equation. Typical examples include $x'(t) = x(x(t))$, studied in detail in [4]. You have one such example with you.
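To make the bracket notation concrete computationally, here is a minimal helper (the naming is my own, not from the references) that builds $x^{[m]}$ by $m$-fold composition, with $x^{[0]}$ the identity:

```python
def iterate(f, m):
    """Return the m-th iterate f^[m]; m = 0 gives the identity map."""
    def g(t):
        for _ in range(m):
            t = f(t)
        return t
    return g

square = lambda t: t ** 2
x2 = iterate(square, 2)   # x^[2](t) = (t^2)^2 = t^4
```

Contrast this with differentiation: the towers $x^{[m]}$ and $x^{(n)}$ are entirely different objects built from the same $x$.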

Some more IFDEs (this is my abbreviation) may be found at [7].

Note that IFDEs are usually defined over complex domains instead of real domains (i.e. the variable is a complex number instead of a real number, while the derivative is the complex derivative). Suffice it to say that doing this makes the theory richer, and allows us to obtain more results around existence and uniqueness.

Furthermore, note that while complex differentiation may be a different kettle of fish compared to real differentiation, we will mostly be dealing with situations where the real and complex derivative match, such as power functions and exponential functions. In case there is a divergence, I will mention it.


Ensembles

Ensembles can be used to handle the simplest IFDEs. Let us quickly see how and why they arise.

The first thing to notice is that the family of power functions is studied not merely because power functions mirror real-life situations, but also because they are analytically very flexible.

For example, take a power function $x^r$. Its derivatives are also power functions, and its composition with itself, which would be $(x^r)^r = x^{r^2}$, is also a power function! Therefore, ensembles should involve functions that are closed under the processes of composition and differentiation.

That's why it's very natural to expect, for an equation like $f^{(n)}(x) = f^{[m]}(x)$, a solution that belongs to some such family.

Power functions

For example, let's think about just $x'(z) = x^{[m]}(z)$. It turns out that a power function $x(t) = \beta t^{\gamma}$ (for suitable $\beta,\gamma$) does the job. To see this, let's plug in this ensemble: $$ \beta \gamma t^{\gamma - 1} = \beta^{\gamma^{m-1} + \ldots +\gamma+ 1}t^{\gamma^m} $$ (Note: check that the RHS is in fact $x^{[m]}(t)$.) This means that all you need is $$ \gamma - 1= \gamma^m \\ \gamma = \beta^{\gamma^{m-1} + \ldots +\gamma} $$ which leaves us to solve these coupled equations. Note that $\gamma,\beta$ will come out to be complex quantities: this is why IFDEs must actually be considered over complex domains rather than purely real domains, if one wants to use ensembles like these.
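As a sketch of solving the coupled system in the simplest case $m=2$ (Eder's equation $x'(t)=x(x(t))$, see [4]), one can let sympy do the algebra; the roots of $\gamma - 1 = \gamma^2$ come out genuinely complex, and the coefficient equation then forces $\beta = \gamma^{1/\gamma}$:

```python
import sympy as sp

gamma = sp.symbols('gamma')

# exponent equation for m = 2: gamma - 1 = gamma^2
roots = sp.solve(sp.Eq(gamma - 1, gamma**2), gamma)

# coefficient equation gamma = beta^gamma gives beta = gamma^(1/gamma)
# (principal branch, evaluated numerically)
betas = [sp.N(g ** (1 / g)) for g in roots]
```

Both roots are $(1 \pm i\sqrt{3})/2$, i.e. $e^{\pm i\pi/3}$, so there is no purely real power-function solution here.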

The same ensemble, which I borrow from [8], is going to answer your question over an appropriate complex domain. We consider the differential equation $$ x^{(n)}(z) = a z^j(x^{[m]}(z))^k \tag{*} $$ where $k,m,n$ are positive integers, $j$ is a non-negative integer, $m \geq 2$, and $a \neq 0$ is a complex number. Note that $n=2, a=1, j=0, m=2, k=1$ captures your situation.

For any such differential equation, consider the function $x(z) = \lambda z^{\mu}$. Substitute this into the IFDE above and you can check that $$ \lambda \mu(\mu-1)\ldots(\mu-n+1)z^{\mu-n} = a \lambda^{k(1+\mu+\ldots+\mu^{m-1})}z^{k\mu^m+j} $$ Therefore, the ensemble results in a solution precisely when $$ \lambda \mu(\mu-1)\ldots(\mu-n+1) = a \lambda^{k(1+\mu+\ldots+\mu^{m-1})} \\ \mu-n=k\mu^m+j \tag{1} $$ Using standard methods in calculus, the following excellent theorem can be deduced.

Theorem: The IFDE $(*)$ has exactly $m$ solutions of the form $\lambda_i z^{\mu_i}$ on every domain $D$ in the complex plane that doesn't include the negative real axis or $0$. These can be derived by solving $(1)$.

This solves your question. Note that if you wanted a solution that would be real on the real line, then unfortunately this works only for $m$ odd: there is precisely one pair of values $\mu,\lambda$ that is real, which yields a real-valued solution on $(0,\infty)$.
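As a numerical sanity check (my own, not from [8]): for your case $n=2, m=2, k=1, j=0, a=1$, system $(1)$ reads $\mu - 2 = \mu^2$ and $\lambda^{\mu} = \mu(\mu-1)$, and the resulting $f$ really does satisfy $f''(z) = f(f(z))$ at a test point away from the cut:

```python
import mpmath as mp

mp.mp.dps = 30

# system (1) for f''(z) = f(f(z)):  mu - 2 = mu^2  and  lam^mu = mu*(mu - 1)
mu = (1 + mp.mpc(0, 1) * mp.sqrt(7)) / 2      # a root of mu^2 - mu + 2 = 0
lam = (mu * (mu - 1)) ** (1 / mu)             # principal branch

def f(z):
    return lam * z ** mu

z0 = mp.mpf(2)                                 # test point off the cut (-oo, 0]
err = abs(mp.diff(f, z0, 2) - f(f(z0)))        # numerical f'' vs f(f(z0))
```

Here $\mu(\mu-1) = \mu^2 - \mu = -2$, so the coefficient equation is just $\lambda^\mu = -2$; the test point is chosen so the principal branches line up.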

The same ensemble also works for a multitude of equations. It may be used on a generalization of the equation $(*)$ for more iterates and derivatives, and it may also be used for $x'(z) = \frac 1{x^{[m]}(z)}$ (along with some suitable generalizations).

However, beyond the family of powers, another family that is closed under both differentiation and iteration is pretty difficult to find! Indeed, exponentials don't work: for example, $e^x$ composed with itself is $e^{e^x}$, which is a different level of complexity.

This explains, to some degree, why it is quite difficult to find a closed-form solution of an IFDE: the lack of families that can constitute an ensemble. I would urge you to try to locate suitable families that are closed under composition and differentiation: really, there's very little room to operate.

Eventually, one realizes that beyond power functions and other trivial families, the first non-trivial candidate for a solution ensemble is the power series. However, before I discuss that, there is one nice trick that helps convert iteration to multiplication.

The iteration-to-multiplication ensemble

If we want to convert iteration to multiplication, we should try to assume functions of a certain form. Suppose that there exists an invertible function $y$ such that $x(t) = y(\mu y^{-1}(t))$ for a suitable function/constant $\mu$. Then, note that $$x(x(t)) = y(\mu y^{-1}(y(\mu y^{-1}(t)))) = y(\mu^2 y^{-1}(t))$$ Consequently, it follows that $x^{[m]}(t) = y(\mu^m y^{-1}(t))$! This is much easier to deal with, because these iterates are roughly of the same complexity as $x(t)$ itself, and all one needs to do is locate a suitable $y$. There is no point in $y$ being a power function, though: and therefore, we need a suitable family for $y$ to belong to. The best such family is the

Power series

Indeed, $y$ is best sought as a power series: once it's convergent and injective, $x$ is well-defined and can be established as a solution using the ensemble spoken about earlier. The advantage of the ensemble is that, on some occasions at least, it's not difficult to show that $y$ exists, even if finding it is a headache.
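Before getting to series, the mechanics of the iteration-to-multiplication trick can be sanity-checked with any concrete invertible $y$; the choice $y(t) = e^t - 1$ below is purely illustrative, and two applications of $x$ indeed multiply $\mu$ twice:

```python
import math

mu = 0.3

def y(t):     return math.exp(t) - 1.0
def y_inv(t): return math.log(1.0 + t)      # inverse of y

def x(t):                                   # x(t) = y(mu * y^{-1}(t))
    return y(mu * y_inv(t))

t0 = 0.7
lhs = x(x(t0))                              # x^[2](t0), computed by honest composition
rhs = y(mu**2 * y_inv(t0))                  # what the IM ensemble predicts
```

The identity is exact, so the two values agree up to floating-point rounding.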

For example, the equation $x'(t) = x(x(t))$ in full generality has a solution of this form. Perhaps more surprising is the following generalization that uses the ensemble, as I will explain shortly:

Consider the equation $x'(z) = \sum_{i=1}^n c_ix^{[i]}(z)$ where $C = \sum_{i=1}^n c_i \neq 0$ and let $0<|\mu|<1$ be a complex number. Then, in a neighbourhood of $\frac{\mu}{C}$, there is a solution $x(z)$ of this equation. It has the form $$ x(z) = \frac{\mu}{C} + \sum_{i=1}^\infty \lambda_i\left(z - \frac{\mu}{C}\right)^i $$ where $\lambda_1,\lambda_2,\lambda_3$ are explicitly known in terms of $\mu$ and the $c_i$ and can be written down, and the $\lambda_i, i\geq 4$ are also explicitly known but very long to write down.

Here's a very basic idea of how this works: take the equation $x'(z) = x(x(z))$. Let $x(z) = y(\mu y^{-1}(z))$. Then one gets $$ x'(z) = y'(\mu y^{-1}(z)) \frac{\mu}{y'(y^{-1}(z))} $$ using the chain rule together with the derivative of the inverse, and $$ x(x(z)) = y(\mu^2y^{-1}(z)) $$ Now, if we change variables to let $t = y^{-1}(z)$, then we get $$ y'(\mu t) = \frac{y'(t)}{\mu} y(\mu^2t) $$ which involves NO compositions! Consequently, it is easy to work out what $y$ is like in terms of a Taylor series, or you can see if you can get an ensemble for it (I don't know a situation where there is an ensemble, but in any case it's interesting to know the problem reduces). Once you get $y$, you can get $x$.

The equation that we formed in terms of $y$ given by the iteration-multiplication ensemble is referred to as the auxiliary equation. The auxiliary equation may be used to give solutions to many more first-order IFDEs, but perhaps it is the second-order application that will be helpful.
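As a sketch of that Taylor-series workout (the truncation order and the normalization of the linear coefficient to $1$ are my choices, since the scale of $y$ is free), matching coefficients of the auxiliary equation $\mu\, y'(\mu t) = y'(t)\, y(\mu^2 t)$ with sympy gives the first few coefficients of $y$:

```python
import sympy as sp

t, mu = sp.symbols('t mu')
a0, a2, a3 = sp.symbols('a0 a2 a3')

# truncated ansatz y(t) = a0 + t + a2 t^2 + a3 t^3 (linear coefficient fixed to 1)
y  = a0 + t + a2*t**2 + a3*t**3
yp = sp.diff(y, t)

# auxiliary equation: mu * y'(mu t) - y'(t) * y(mu^2 t) = 0, matched up to t^2
# (higher powers of t are truncation noise from the cubic cutoff)
eq = sp.expand(mu * yp.subs(t, mu*t) - yp * y.subs(t, mu**2 * t))
coeffs = [eq.coeff(t, k) for k in range(3)]
sol = sp.solve(coeffs, [a0, a2, a3], dict=True)[0]
```

The constant term forces $a_0 = \mu$ (the fixed point), then each further coefficient is determined linearly: $a_2 = \frac{\mu}{2(\mu-1)}$, and so on.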

Second-order IM ensemble usage

Using the IM ensemble, we can look at a second-order differential equation as well. Nothing much changes apart from the fact that $y''$ needs to be computed instead of $y'$. Let's take the example of $x''(z) = x(x(z))$. Following the IM ensemble and setting $t = y^{-1}(z)$, we get $$ \mu^2 y''(\mu t)\, y'(t) - \mu\, y'(\mu t)\, y''(t) = y'(t)^3\, y(\mu^2 t) $$ which, once again, involves no composition and can be solved via Taylor series. This generalizes as:

Consider the equation $$ x''(x^{[r]}(z)) = c_0z + c_1x(z) + \ldots + c_mx^{[m]}(z) $$ where $r,m \geq 0$ and at least one $c_i \neq 0$. Then, under some conditions on $\alpha$ and for all $\mu$, there is a solution to this equation satisfying $x(\mu) = \mu, x'(\mu) = \alpha$.
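Independently of the theorem, the second-derivative computation behind the example $x''(z)=x(x(z))$ can be checked symbolically: for any invertible $y$, with $t=y^{-1}(z)$, the chain and inverse-function rules give $x''(z) = \big(\mu^2 y''(\mu t)\,y'(t) - \mu\, y'(\mu t)\, y''(t)\big)/y'(t)^3$. Here is a sympy sketch with a purely illustrative Möbius choice of $y$:

```python
import sympy as sp

z, t, mu = sp.symbols('z t mu')

# illustrative invertible choice of y and its inverse
y_of  = lambda s: s / (1 - s)
yi_of = lambda s: s / (1 + s)

x   = y_of(mu * yi_of(z))                # x(z) = y(mu * y^{-1}(z))
lhs = sp.diff(x, z, 2)                   # x''(z) by direct differentiation

yp  = lambda s: sp.diff(y_of(t), t).subs(t, s)
ypp = lambda s: sp.diff(y_of(t), t, 2).subs(t, s)

T   = yi_of(z)                           # t = y^{-1}(z)
rhs = (mu**2 * ypp(mu*T) * yp(T) - mu * yp(mu*T) * ypp(T)) / yp(T)**3
```

Setting this expression for $x''(z)$ equal to $x(x(z)) = y(\mu^2 t)$ and clearing $y'(t)^3$ yields the displayed equation.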

Some sources where more usages of the IM ensemble may be found are :

  • The first order $x'(z) = f\left(\sum_{s=0}^m c_s x^{[s]}(z)\right)$, solved as in [9].

  • The second-order non-linear equation $x''(z) = (x^{[m]}(z))^2$, solved in [10].

  • $x''(z) = x(az+bx(z))$, $x''(z) = x(az+bx'(z))$.

Note that some of the equations discussed above are not strictly IFDEs as we've defined them. For example, $x''(x^{[r]}(z))$ doesn't fall within the purview of an IFDE. However, this just illustrates the power of the IM ensemble.


Functional-analytic approach to some IFDEs

This is a fairly advanced section which I am adding for the sake of completeness; it is a primer on how one thinks about this problem from a functional-analytic point of view.

With this, we enter the real-analytic scenario : we now desire solutions that may not be as regular as an analytic function, but will likely admit some amount of differentiability. The idea is the same as for usual differential equations : write an IFDE as a fixed-point equation and prove that a fixed point exists in a certain domain.

Suppose we start with $x'(t) = cx^{[2]}(t)$ with $x(\mu)=\mu$ (so that we can center the solution around $\mu$). Then, we integrate to find $$ x(t) = \mu + \int_\mu^t cx^{[2]}(s)ds $$

Now, consider the operator $$ (Tx)(t) = \mu + \int_\mu^t cx^{[2]}(s)ds $$ on the Banach space $C^{(n)}[\mu-\delta,\mu+\delta]$ with the supremum norm. It is possible to use the Leray-Schauder fixed point theorem here, for example. For a more general class, it is proved in [11] that:

Consider the equation $$ x'(t) = \sum_{i=1}^n c_ix^{[i]}(t) + F(t) $$ Then, under conditions on $\delta,\mu$ depending only on the $c_i$, and for $F \in C^{n-1}([\mu-\delta,\mu+\delta])$ such that $F^{(n-1)}$ is Lipschitz-continuous, there exists a solution to the problem in $C^{(n)}[\mu-\delta,\mu+\delta]$. Under some restrictive conditions on the derivatives and the constants involved, this solution is unique and varies continuously with the underlying parameters.

This is extended with much difficulty to the non-linear $x'(t) = f(x^{[m]}(t))$ under very special conditions on $f$. The proof is essentially the LS fixed-point theorem, and one fits the situation to allow the LS theorem to go through.

A brief note on WHY LS works, though. How exactly do we ride the composition wave? The answer comes when we try to think of the Banach FPT instead: I would urge readers here to try and find conditions on $c,\mu,\delta$ and $x^{(1)}(\mu),\ldots,x^{(n)}(\mu)$ that make the mapping $T$ outlined above a contractive mapping. Thinking about $Tx - Ty$ or $x^{[2]}- y^{[2]}$ and relating these functions to the derivatives of $x,y$ respectively will give an idea of how this might work. The same kind of logic ensures that the more general LS argument goes through.
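In that spirit, here is a toy numerical Picard iteration for $x'(t)=x(x(t))$ with $x(\mu)=\mu$, i.e. $c=1$ (a minimal sketch: the grid, $\mu=0.5$, $\delta=0.2$ and the linear interpolation are my choices, picked so the iterates stay inside the interval and $T$ is contractive):

```python
import numpy as np

mu, c, delta = 0.5, 1.0, 0.2
ts = np.linspace(mu - delta, mu + delta, 2001)
dt = ts[1] - ts[0]
i_mu = len(ts) // 2                      # ts[i_mu] == mu

def T(xv):
    inner = np.interp(xv, ts, xv)        # x(x(s)) on the grid, by interpolation
    # cumulative trapezoid integral, re-based so it starts at mu
    integ = np.concatenate(([0.0], np.cumsum((inner[1:] + inner[:-1]) * dt / 2)))
    return mu + c * (integ - integ[i_mu])

x = ts.copy()                            # initial guess x0(t) = t, so x0(mu) = mu
for _ in range(60):
    x_new = T(x)
    diff = np.max(np.abs(x_new - x))     # sup-norm gap between successive iterates
    x = x_new
```

A rough bound explains the convergence: $\|Tx-Ty\|_\infty \le c\,\delta\,(1+L)\|x-y\|_\infty$ where $L$ bounds the Lipschitz constants of the iterates, and with these parameters the factor is below $1$.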

Some other methods may be used for solving IFDEs. I consider these somewhat ad-hoc, but I can give a brief idea of the monotone method at least. The monotone method relies upon the fact that the underlying fixed-point iteration can be a monotone operator under certain conditions; one may then begin iterating at a suitable "lower solution" of the equation and climb up to a solution, where convergence is much easier to prove because of monotonicity and boundedness.

References

[1] Cooke, K. L., Functional-differential equations: Some models and perturbation problems, Differ. Equations dynam. Systems, Proc. Int. Sympos. Puerto Rico 1965, 167-183 (1967). ZBL0189.40301.

[2] Driver, R. D., A two-body problem of classical electrodynamics: the one-dimensional case, Ann. Phys. 21, 122-142 (1963). ZBL0108.40705.

[3] Cheng, Sui Sun, Smooth solutions of iterative functional differential equations, Akça, Haydar (ed.) et al., Proceedings of the international conference: 2004 – Dynamical systems and applications. Papers based on talks given at the conference, Antalya, Turkey, July 5–10, 2004. Dhahran: King Fahd University of Petroleum and Minerals, Department of Mathematical Sciences. 228-252, electronic only (2004). ZBL1353.34075.

[4] Eder, E., The functional differential equation $x'(t)=x(x(t))$, J. Differ. Equations 54, 390-400 (1984). ZBL0497.34050.

[5] Bélair, Jacques, Population models with state-dependent delays, Mathematical population dynamics, Proc. 2nd Int. Conf., New Brunswick/NJ (USA) 1989, Lect. Notes Pure Appl. Math. 131, 165-176 (1991). ZBL0749.92014.

[6] Driver, R. D., Can the future influence the present?

[7] Dunkel, G. M., On nested functional differential equations, SIAM J.Appl. Math. 18, 514-525 (1970). ZBL0212.43603.

[8] Li, Wenrong; Cheng, Sui Sun; Lu, Tzon Tzer, Closed form solutions of iterative functional differential equations, Appl. Math. E-Notes 1, 1-4 (2001). ZBL0980.34061.

[9] Xu, Bing; Zhang, Weinian; Si, Jianguo, Analytic solutions of an iterative functional differential equation which may violate the Diophantine condition, J. Difference Equ. Appl. 10, No. 2, 201-211 (2004). ZBL1057.34067.

[10] Si, Jianguo; Wang, Xinping, Analytic solutions of a second-order iterative functional differential equation, J. Comput. Appl. Math. 126, No. 1-2, 277-285 (2000). ZBL0983.34056.

[11] Si, Jian-Guo; Cheng, Sui Sun, Note on an iterative functional-differential equation, Demonstr. Math. 31, No. 3, 609-614 (1998). ZBL0919.34064.