Solutions of $dx/dt = \lambda x$ with initial condition


I'm working through Robinson's Introduction to Ordinary Differential Equations (Section 8.2) and have a few questions.

The section is on finding the solution of the simplest possible linear differential equation,
$$\frac{dx}{dt} = \lambda x$$ with the initial condition $$x(t_0) = x_0.$$

Here's the first issue I encountered. If $x_0 = 0$, the author claims $x(t) = 0$ for all time. Could anyone provide a formal proof of this? I can see that $x(t_0) = x_0 = 0$ gives $\dot{x}(t_0) = 0$, but why should this imply $x(t) \equiv 0$? I can sort of see this visually by drawing a phase diagram like the one in Figure 8.1: in the case $x_0 = 0$, the trajectory starts at the fixed point $0$, so $x$ must be the constant zero solution. However, I'd really appreciate a more detailed explanation.

If $x_0 \neq 0$, he goes on to integrate the equation:

$$\int_{x = x_0}^{x(t)} \frac{1}{x}\,dx = \int_{\tau=t_0}^t \lambda \,d\tau$$

and arrives at

$$ \frac{|x|}{|x_0| } = e^{\lambda(t-t_0)} \tag{$*$}$$
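Spelling out the step that produces ($*$): evaluating the two integrals gives

$$\ln|x(t)| - \ln|x_0| = \lambda(t - t_0),$$

and exponentiating both sides yields ($*$).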

Next he writes:

To work out what to do about the modulus signs, the easiest thing is
to draw the phase diagram. For the case $\lambda > 0$ this is shown in Figure
8.1, from which we can see that $x(t)$ and $x_0$ have the same sign. It follows that we can remove the modulus signs and multiply up to give $$x = x_0 e^{\lambda(t-t_0)}$$

The figure is reproduced below:

*[Figure 8.1: phase diagram]*

The way I understand this is: if $\lambda > 0$, then from the differential equation, $x$ and $\dot{x}$ always have the same sign. So if $x(t_0) = x_0 > 0$, then $x(t) > 0$ for all $t$. Similarly, if $x(t_0) = x_0 < 0$, then $\dot{x} < 0$ and hence $x(t) < 0$ for all $t$. Either way, $x(t)$ and $x_0$ have the same sign, so we can simply remove the modulus signs.

The case $\lambda < 0$ was left to the reader. Here is my reasoning:

  • If $x_0 < 0$, then $\dot{x}(t_0) > 0$, so $x(t)$ is increasing (at least until $x(t) = 0 \iff \dot{x}(t) = 0$). But from ($*$), $x = 0 \iff e^{\lambda(t-t_0)} = 0$, which is not satisfied by any $t \in \mathbb{R}$. So $x$ never reaches $0$. Hence the solution must start negative, be increasing, and tend to $0$ as $t \to \infty$: $x = -|x_0| e^{\lambda(t-t_0)} = x_0 e^{\lambda(t-t_0)}$

  • If $x_0 > 0$, then $\dot{x}(t_0) < 0$, so $x(t)$ is decreasing (at least until $x(t) = 0 \iff \dot{x}(t) = 0$). But once again $x = 0$ is impossible for any $t$, so the solution starts positive, is decreasing, and tends to $0$ as $t \to \infty$: $x = x_0 e^{\lambda(t-t_0)}$

So in every single case ($\lambda > 0$ or $\lambda <0$) we obtain

$$x = x_0 e^{\lambda(t-t_0)}$$
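As a quick numerical sanity check (my own addition, not from the book), one can integrate $\dot{x} = \lambda x$ with a simple forward-Euler scheme and compare the result against $x_0 e^{\lambda(t-t_0)}$ for every sign combination of $\lambda$ and $x_0$, including $x_0 = 0$:

```python
import math

def euler_solve(lam, x0, t0, t1, n=100_000):
    """Integrate dx/dt = lam * x from t0 to t1 with forward Euler."""
    dt = (t1 - t0) / n
    x = x0
    for _ in range(n):
        x += dt * lam * x
    return x

def exact(lam, x0, t0, t1):
    """The claimed solution x(t) = x0 * exp(lam * (t - t0))."""
    return x0 * math.exp(lam * (t1 - t0))

# Check lambda > 0 and lambda < 0, with x0 positive, negative, and zero.
for lam in (0.7, -0.7):
    for x0 in (2.0, -2.0, 0.0):
        num = euler_solve(lam, x0, 1.0, 3.0)
        ex = exact(lam, x0, 1.0, 3.0)
        assert abs(num - ex) < 1e-2, (lam, x0, num, ex)

print("all cases match x0 * exp(lam * (t - t0))")
```

Note that the numerical trajectory never changes sign relative to $x_0$, consistent with the phase-diagram argument above.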

Is my reasoning correct? Is there any simpler way to arrive at the same conclusions?

Best Answer

Your reasoning is correct. However, it should be noted that the book's argument is circular. Indeed, the method of separation of variables leads to the integral $\int_{x_0}^{x(t)}\frac{\mathrm{d}x}{x}$ on the left-hand side, which is not defined when $x(t)$ and $x_0$ have different signs, because of the singularity at $x = 0$. As a consequence, the author is verifying a property which is already needed/assumed to generate the solution.
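A non-circular route, which also settles the $x_0 = 0$ case asked about above, is the standard integrating-factor argument (a sketch, not taken from Robinson's Section 8.2): set $h(t) = x(t)e^{-\lambda t}$. Then

$$h'(t) = \dot{x}(t)\,e^{-\lambda t} - \lambda x(t)\,e^{-\lambda t} = \bigl(\dot{x}(t) - \lambda x(t)\bigr)e^{-\lambda t} = 0,$$

so $h$ is constant: $h(t) = h(t_0) = x_0 e^{-\lambda t_0}$. Therefore

$$x(t) = x_0\,e^{\lambda(t - t_0)} \quad \text{for all } t,$$

with no case analysis on the sign of $x_0$; in particular, $x_0 = 0$ forces $x(t) \equiv 0$.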
