Let me restrict attention to the linear case. Then the space of solutions is a vector space, and one can ask what its dimension is. The answer, for an $n^{th}$ order homogeneous linear ODE (with constant coefficients, to be completely precise), is that it is always $n$-dimensional. This means you can find a basis of it consisting of $n$ linearly independent solutions, but there are in general many such bases. (And there are many more than $n$ solutions; if $n$ is positive there are infinitely many solutions.)
This is a consequence of the existence and uniqueness theorems for ODEs, which say that
- Every tuple of initial conditions $\bigl(y(0), y'(0), y''(0), \dots, y^{(n-1)}(0)\bigr)$ corresponds to a solution (existence), and
- A solution $y$ is completely determined by its initial conditions $y(0), y'(0), y''(0), \dots, y^{(n-1)}(0)$ (uniqueness).
So the reason the space of solutions is $n$-dimensional is that the space of initial conditions is $n$-dimensional.
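To make this concrete, here is a minimal sketch (with the hypothetical example equation $y''-y=0$, chosen for illustration and not taken from the text above) of how the $2$-dimensional space of initial conditions parametrizes the solution space: the basis $\{e^t, e^{-t}\}$ has two elements, and each pair $(y(0), y'(0))$ picks out exactly one linear combination.

```python
import math

# For y'' - y = 0 the solution space is 2-dimensional, spanned by
# e^t and e^{-t}.  Writing y(t) = c1*e^t + c2*e^{-t}, the initial
# conditions give a linear system:
#   y(0)  = c1 + c2 = a
#   y'(0) = c1 - c2 = b
def coefficients(a, b):
    """Coordinates in the basis (e^t, e^{-t}) of the unique solution
    with y(0) = a and y'(0) = b."""
    return (a + b) / 2, (a - b) / 2

def y(t, a, b):
    c1, c2 = coefficients(a, b)
    return c1 * math.exp(t) + c2 * math.exp(-t)

# The map (a, b) -> solution is a linear bijection, so every pair of
# initial conditions corresponds to exactly one solution.
a, b = 3.0, -1.0
h = 1e-5
print(abs(y(0, a, b) - a) < 1e-9)                          # y(0) = a
print(abs((y(h, a, b) - y(-h, a, b)) / (2 * h) - b) < 1e-6)  # y'(0) ~ b
```

The same bookkeeping works for any order $n$: the linear system relating coefficients to initial conditions is $n \times n$ and invertible precisely because the chosen solutions are linearly independent.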
The question of what these solutions actually look like requires a more detailed analysis and that's a bit of a separate question.
Consider a general $n$-th order linear homogeneous ODE:
$$\sum_{i=0}^na_i(t)y^{(i)}(t)=0\tag{1}$$
where $a_n(t)\neq 0$ on a certain interval $J$ around a point $t_0$.
First, we should emphasize that solutions of the ODE $(1)$ are in general only defined on small intervals. So what we want to prove is:
If all the $a_i$ are defined and $C^1$ on some interval around a point $t_0$, and $a_n(t_0)\neq 0$, then there exists an interval $I$ around $t_0$ on which the solution set of $(1)$ has dimension $n$.
Moreover, if $K$ is any interval on which $a_n\neq 0$, then there exist $n$ linearly independent solutions $y_1,\ldots,y_n$ of $(1)$ on $K$, and these solutions generate the whole solution set of $(1)$ on $K$.
The easiest way is to use the Picard–Lindelöf theorem, which works in $\mathbb{R}^n$. Here is a restricted version of the general statement:
Theorem [Picard–Lindelöf]: Consider a Cauchy problem
$$y'(t)=F(t,y(t)),\qquad y(t_0)=(f_1,\ldots,f_n)\tag{2}$$
where $F=F(t,x_1,\ldots,x_n)$ is a $C^1$ function from some neighbourhood of $(t_0,f_1,\ldots,f_n)$ to $\mathbb{R}^n$. Then
- There exist an interval $I$ around $t_0$ and a function $y=y(t)$, defined on $I$, which is a solution of $(2)$. (This is the existence of solutions.)
- Any two solutions $y_1,y_2$ of $(2)$, defined on intervals $I$ and $K$, coincide on $I\cap K$. (This is the "uniqueness" of solutions.)
Solving the ODE $(1)$ is equivalent to solving the following first-order ODE in $\mathbb{R}^n$: Again, recall that we assume $a_n(t)\neq 0$ on some interval $J$ around $t_0$. Let $z(t)=(z_0(t),\ldots,z_{n-1}(t))$, and consider the problem
$$z'(t)=F(t,z(t))\tag{3}$$
where $F(t,x_0,\ldots,x_{n-1})=\left(x_1,\ldots,x_{n-1},-\sum_{i=0}^{n-1}\frac{a_i(t)}{a_n(t)}x_i\right)$.
The equation $(3)$ means that
$$z_0'=z_1,\qquad z_1'=z_2,\ldots\qquad\text{or more generally }z_i=z_0^{(i)},$$
and $z_{n-1}'=-\sum_{i=0}^{n-1}\frac{a_i(t)}{a_n(t)}z_i$.
Therefore, $z$ is a solution of $(3)$ iff $y:=z_0$ is a solution of $(1)$. Moreover, the map $z\mapsto z_0=y$ is a linear bijection from the solution space of $(3)$ to the solution space of $(1)$, so they have the same dimension.
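The order reduction above can be sketched numerically. This is an illustrative example only: it instantiates $(1)$ as $y''+y=0$ (so $a_2=1$, $a_1=0$, $a_0=1$), builds the vector field $F$ of $(3)$, and integrates with a classical Runge–Kutta scheme to check that the first component $z_0$ reproduces the known solution $\cos t$.

```python
import math

# Vector field of the first-order system (3) for the example y'' + y = 0:
#   F(t, z0, z1) = (z1, -(a0*z0 + a1*z1)/a2)  with a0 = a2 = 1, a1 = 0.
def F(t, z):
    a0, a1, a2 = 1.0, 0.0, 1.0
    return [z[1], -(a0 * z[0] + a1 * z[1]) / a2]

def rk4(F, t0, z0, t1, steps=1000):
    """Classical 4th-order Runge-Kutta integration of z' = F(t, z)."""
    h = (t1 - t0) / steps
    t, z = t0, list(z0)
    for _ in range(steps):
        k1 = F(t, z)
        k2 = F(t + h / 2, [zi + h / 2 * ki for zi, ki in zip(z, k1)])
        k3 = F(t + h / 2, [zi + h / 2 * ki for zi, ki in zip(z, k2)])
        k4 = F(t + h, [zi + h * ki for zi, ki in zip(z, k3)])
        z = [zi + h / 6 * (p + 2 * q + 2 * r + s)
             for zi, p, q, r, s in zip(z, k1, k2, k3, k4)]
        t += h
    return z

# Initial condition z(0) = (y(0), y'(0)) = (1, 0) should give y(t) = cos t.
z = rk4(F, 0.0, [1.0, 0.0], 1.0)
print(abs(z[0] - math.cos(1.0)) < 1e-8)  # z0 tracks y
print(abs(z[1] + math.sin(1.0)) < 1e-8)  # z1 tracks y'
```

The same construction works for any coefficients $a_i(t)$ with $a_n\neq 0$ on the interval of integration; only the body of `F` changes.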
The function $F$ is defined and $C^1$ on $J\times\mathbb{R}^n$. Take a basis $e_1,\ldots,e_n$ of $\mathbb{R}^n$ and solutions $z^1,\ldots,z^n$ of $(3)$, defined on some interval $I$ around $t_0$, and satisfying $z^i(t_0)=e_i$. These $z^i$ are clearly linearly independent.
If $z$ is any other solution of $(3)$ on $I$, write $z(t_0)=\sum\lambda_i e_i$. By the uniqueness part of Picard-Lindelof, $z=\sum\lambda_i z^i$ on $I$.
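Here is an illustration of that last step with the same hypothetical example $y''+y=0$, where the basis solutions of $(3)$ starting at $e_1$ and $e_2$ are known in closed form: $z^1(t)=(\cos t,-\sin t)$ and $z^2(t)=(\sin t,\cos t)$. Uniqueness forces the solution with $z(0)=\lambda_1 e_1+\lambda_2 e_2$ to be $\lambda_1 z^1+\lambda_2 z^2$.

```python
import math

# Basis solutions of (3) for y'' + y = 0, with z^i(0) = e_i:
def z1(t):
    return (math.cos(t), -math.sin(t))

def z2(t):
    return (math.sin(t), math.cos(t))

# The solution with z(0) = (l1, l2): y(t) = l1*cos t + l2*sin t.
def z_general(t, l1, l2):
    return (l1 * math.cos(t) + l2 * math.sin(t),
            -l1 * math.sin(t) + l2 * math.cos(t))

l1, l2 = 2.0, -0.5
for t in (0.0, 0.7, 1.5):
    combo = tuple(l1 * u + l2 * v for u, v in zip(z1(t), z2(t)))
    exact = z_general(t, l1, l2)
    print(all(abs(a - b) < 1e-12 for a, b in zip(combo, exact)))

# Linear independence: det[z^1(t) | z^2(t)] = cos^2 t + sin^2 t = 1,
# which never vanishes.
w = z1(0.7)[0] * z2(0.7)[1] - z1(0.7)[1] * z2(0.7)[0]
print(abs(w - 1.0) < 1e-12)
```

The determinant computed at the end is the matrix-valued analogue of the Wronskian; its non-vanishing is exactly the linear independence of $z^1,\ldots,z^n$ used in the argument.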
A second order linear differential equation needs two linearly independent solutions so that it has a solution for any initial condition, say, $y(0) = a$, $y'(0) = b$ for arbitrary $a, b$. From a mechanical point of view, the position and the velocity can be prescribed independently.
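The mechanical picture can be sketched with the simplest possible example (chosen here for illustration): the free particle $y''=0$, whose basis $\{1, t\}$ gives $y(t)=a+bt$, so the initial position $a$ and initial velocity $b$ are prescribed independently, one per basis solution.

```python
# Free particle y'' = 0: general solution y(t) = a + b*t in the basis {1, t}.
def y(t, a, b):
    """Position at time t with y(0) = a (position) and y'(0) = b (velocity)."""
    return a + b * t

a, b = 5.0, -2.0
h = 1e-6
print(y(0, a, b) == a)                                 # prescribed position
print(abs((y(h, a, b) - y(0, a, b)) / h - b) < 1e-9)   # prescribed velocity
```

Two basis solutions, two independently adjustable physical quantities: that is the $n=2$ case of the dimension count above.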