On the existence of infinitely many linearly independent solutions for a non-linear IVP $y'=f(t,y),~y(t_0)=y_0$.

Tags: initial-value-problems, ordinary-differential-equations, real-analysis

Consider the IVP
$$
\begin{cases}
y'=f(t,y),\\
y(t_0)=y_0
\end{cases}
\label{1}\tag{$\ast$}
$$

Case $1$: $f$ is Lipschitz w.r.t $y$ and continuous w.r.t $t$ in a vertical (infinite) strip $[a,b]\times \Bbb R$ containing the point $(t_0,y_0)$.

Here the existence and uniqueness of the solution on the interval $[a,b]$ are guaranteed by Picard's theorem.

Case $2$: $f$ loses Lipschitz continuity w.r.t $y$ near the point $(t_0,y_0)$, e.g. $f(t,y)=4y^{3/4},\ \sqrt y,\dots$ (take $y_0=0$).
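For instance, $f(y)=\sqrt y$ fails to be Lipschitz in $y$ near $y_0=0$: for $y>0$,
$$
\frac{|\sqrt y-\sqrt 0|}{|y-0|}=\frac 1{\sqrt y}\longrightarrow\infty\quad\text{as } y\to 0^+,
$$
so no Lipschitz constant works on any neighbourhood of $0$.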

In most examples of such non-Lipschitz $f$ that I have seen, the IVP admits infinitely many linearly independent solutions.

Doubts:

i. Can we conclude that if there exist two linearly independent solutions of \eqref{1}, then there will be infinitely many linearly independent solutions?

If so, how to justify the claim?

ii. Are any other conditions required to ensure that there will be infinitely many linearly independent solutions of \eqref{1}?

Best Answer

An example of which the OP speaks is found here. To be specific: for the IVP $y' = y^{2/3}$, $y(0)=0$, one has the family of solutions $$ y_{T}(t) = \begin{cases} 0 & 0 \leq t \leq T \\ \frac{(t-T)^3}{27} & T \leq t \end{cases} $$ for every $T \geq 0$. That each $y_T$ is indeed a solution can be verified by differentiation, as checked below. The point of interest, however, is the linear independence.
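For completeness, here is the differentiation check: for $t > T$,
$$ y_T'(t) = \frac{(t-T)^2}{9} = \left(\frac{(t-T)^3}{27}\right)^{2/3} = y_T(t)^{2/3}, $$
while for $t < T$ both $y_T'$ and $y_T^{2/3}$ vanish, and at $t = T$ the two one-sided derivatives agree (both equal $0$), so $y_T$ solves the IVP on all of $[0,\infty)$.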

Indeed, we claim for this toy case that there are infinitely many solutions forming a linearly independent set, in the sense that no nontrivial finite linear combination of them is identically $0$. Suppose that $\sum c_{i} y_{t_i}(t) = 0$ on $[0,\infty)$ for some $t_1<t_2<\ldots<t_n$. On the interval $(t_1,t_2]$, each of $y_{t_2},\ldots,y_{t_n}$ equals $0$, so the identity $\sum c_{i} y_{t_i}(t) = 0$ on $(t_1,t_2]$ reduces to $c_{1}y_{t_1}(t) = 0$ on $(t_1,t_2]$. Therefore $c_1 = 0$, since $y_{t_1} \neq 0$ on this interval.

Repeating this argument with the remaining points $t_2,\ldots,t_{n-1}$ gives $c_i=0$ for every $i<n$. The final equation is then $c_{n} y_{t_n} = 0$ on $[0,\infty)$, which forces $c_n = 0$ since $y_{t_n} \neq 0$ for $t>t_n$. Thus all the $c_i$ equal $0$, which shows that the $y_{t_i}$ form a linearly independent set.


We can generalize the example given here to a large class of IVPs. The theme is the same: one can "hold" the solution at $0$ at the non-Lipschitz point for as long as needed, and then "release" the actual solution after an arbitrary holding time. This technique, which I refer to as "hold and release", always produces infinitely many linearly independent solutions because of the argument given above.

Consider $y' = f(y)$. We will work out conditions under which the above "hold and release" strategy works. To make the proof easier, the idea is to "work backwards": begin with the intended solution, modeled on the previous examples, and then make sure it is regular enough.

For starters, this equation needs to admit the constant $0$ as a solution, so we take $f(0)=0$ and insist on the initial condition $y(0)=0$.

Then, from $y'(t) = f(y(t))$, loosely speaking we can write $\frac{dy}{dt} = f(y)$, hence $\frac{dt}{dy} = \frac 1{f(y)}$. Therefore, $t = G(y)$ where $G(x) = \int_0^x \frac{1}{f(u)}du$, hence $y = G^{-1}(t)$.
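For instance, with $f(y)=y^{2/3}$ from the example above, this recipe gives
$$ G(x) = \int_0^x u^{-2/3}\,du = 3x^{1/3}, \qquad\text{so}\qquad t = 3y^{1/3} \implies y = G^{-1}(t) = \frac{t^3}{27}, $$
which (after the shift $t \mapsto t-T$) is exactly the "released" branch of the family written down earlier.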

Thus, we must ensure that $G$ as above is defined and regular enough for $y$ to be differentiable. Accumulating all that information, we may finally provide a catchy name to our elementary findings and write:

Theorem (Existence of infinitely many linearly independent solutions (IMLIs)): Consider the IVP $y' = f(y)$, $y(0)=0$. Suppose that for some $\tau>0$, $f$ is continuous on $[0,\tau)$ with $f(0)=0$, $f>0$ on $(0,\tau)$, and $\frac 1{f(x)}$ is integrable on $(0,\tau)$. Then there exist infinitely many linearly independent solutions to this IVP.

Proof: Let $G(x) = \int_{0}^{x} \frac{du}{f(u)}$, which exists by assumption. Note that $G$ is differentiable on $(0,\tau)$ with $G'(x) = \frac 1{f(x)} > 0$, a continuous function. Therefore $G$ is a continuously differentiable, strictly increasing function, hence admits an inverse, say $G^{-1}(z)$, defined on the range of $G$.

Moreover, $G^{-1}(z)$ is also differentiable, with $(G^{-1})'(z) = f(G^{-1}(z))$ by the usual rule for differentiating an inverse. Furthermore, adapting the proof of this rule and noting that $G(0)=0$, hence $G^{-1}(0)=0$, we get that $G^{-1}$ is right-differentiable at $z=0$ with derivative $f(G^{-1}(0)) = 0$.
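A quick way to see that last claim: for small $z>0$, the mean value theorem gives
$$ \frac{G^{-1}(z) - G^{-1}(0)}{z - 0} = (G^{-1})'(\xi) = f\big(G^{-1}(\xi)\big) \quad\text{for some } \xi \in (0,z), $$
and as $z \to 0^+$ we have $G^{-1}(\xi) \to 0$, so by continuity of $f$ the quotient tends to $f(0) = 0$.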

Now, for each $T \in [0,\tau)$, consider the following family of solutions $$ y_{T}(t) = \begin{cases} 0 & t \leq T \\ G^{-1}(t-T) & t>T. \end{cases} $$

It is fairly clear from everything established above that $y_T$ solves the equation on $[0,\tau) \setminus \{T\}$. At $t=T$, both one-sided derivatives equal $0$, because the right-hand derivative of $G^{-1}(z)$ at $z=0$ is $0$. Since also $f(y_T(T)) = 0 = y_T'(T)$, it follows that $y_T$ is in fact a solution of the IVP for every $T<\tau$.

Finally, using the same argument as for the specific example above, one can show that this family is linearly independent. Hence, we are done.
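As an illustration, the theorem applies to the other example mentioned in the question, $y' = \sqrt y$, $y(0)=0$: here $f$ is continuous, $f(0)=0$, $f>0$ on $(0,\infty)$, and $\frac 1{\sqrt u}$ is integrable near $0$, with
$$ G(x) = \int_0^x \frac{du}{\sqrt u} = 2\sqrt x, \qquad G^{-1}(z) = \frac{z^2}{4}, $$
so hold-and-release produces the solutions $y_T(t) = 0$ for $t \leq T$ and $y_T(t) = \frac{(t-T)^2}{4}$ for $t > T$, one for each $T \geq 0$.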


There are many possible extensions of IMLIs that I can think of, for which I won't provide rigorous statements but will instead remark on why the extension may be possible.

  • If the initial condition isn't $y(0)=0$, it may still be possible that IMLIs exist, because the specified initial condition still drives the flow of the differential equation to the problem point, whereupon one may perform hold-and-release. An example of that is seen here, and may be labelled the "changed initial condition" variation of IMLIs.

  • Like the above, but slightly different: an example showing that one doesn't always need to hold at $0$ is seen here. There, the holding point is $1$.

  • This is an example where the equation isn't autonomous and yet we are able to perform hold-and-release. Essentially, what happens is that $0$ still ends up being a solution of the differential equation, and the equation admits another solution because it is variable-separable and hence can be solved explicitly. Therefore, one can extend IMLIs in the following way: if the IVP admits $0$ as a solution, but also admits another non-constant solution, then one can perform hold-and-release and get IMLIs for this IVP. That is the content of the result here, which also employs hold-and-release, hence leads to IMLIs as well; a toy instance is sketched after this list.
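For concreteness, here is a toy non-autonomous instance of hold-and-release (chosen purely for illustration, not necessarily the one the links above refer to): the IVP $y' = 3t\,y^{2/3}$, $y(0)=0$ admits, for every $T \geq 0$, the solutions
$$ y_T(t) = \begin{cases} 0 & 0 \leq t \leq T \\ \left(\dfrac{t^2-T^2}{2}\right)^{3} & t > T, \end{cases} $$
as one checks by differentiation: for $t>T$, $y_T'(t) = 3t\left(\frac{t^2-T^2}{2}\right)^2 = 3t\,y_T(t)^{2/3}$, while both sides vanish for $t \leq T$. The same argument as before shows that this family is linearly independent.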

Apart from these, while I am aware of general non-uniqueness criteria, I do not know criteria that would show the existence of IMLIs other than hold-and-release.
