Suppose that $y_1(t), \ldots, y_n(t)$ are solutions of $\frac{d^n y}{dt^n} + p_{n-1}(t) \frac{d^{n-1} y}{dt^{n-1}} + \cdots + p_1(t) \frac{dy}{dt} + p_0(t) y = 0$, and suppose that their Wronskian is zero at $t = t_0$, i.e.
\begin{equation*}
\left|
\begin{array}{cccc}
y_1(t_0) & y_2(t_0) & \cdots & y_n(t_0) \\
y_1'(t_0) & y_2'(t_0) & \cdots & y_n'(t_0) \\
\vdots & \vdots & \ddots & \vdots \\
y_1^{(n-1)}(t_0) & y_2^{(n-1)}(t_0) & \cdots & y_n^{(n-1)}(t_0)
\end{array}
\right| = 0.
\end{equation*}
Then the corresponding matrix is not invertible, and the system of equations
\begin{equation*}
\begin{array}{ccccccccc}
c_1 y_1(t_0) &+& c_2 y_2(t_0) &+& \cdots &+& c_n y_n(t_0) &=& 0 \\
c_1 y_1'(t_0) &+& c_2 y_2'(t_0) &+& \cdots &+& c_n y_n'(t_0) &=& 0 \\
\vdots & & \vdots & & & & \vdots & & \vdots \\
c_1 y_1^{(n-1)}(t_0) &+& c_2 y_2^{(n-1)}(t_0) &+& \cdots &+& c_n y_n^{(n-1)}(t_0) &=& 0
\end{array}
\end{equation*}
has a nontrivial solution: there exist constants $c_1, c_2, \ldots, c_n$, not all zero, satisfying all of the equations.
Let $y(t) = c_1 y_1(t) + \cdots + c_n y_n(t)$. Because $y(t)$ is a linear combination of solutions of the differential equation, $y(t)$ is also a solution of the differential equation. Additionally, because the weights satisfy the above system of equations, we have $y(t_0) = y'(t_0) = \cdots = y^{(n-1)}(t_0) = 0$.
These initial conditions and the original differential equation define an initial-value problem, of which $y(t)$ is a solution. If $p_0(t), p_1(t), \ldots, p_{n-1}(t)$ are continuous, then any initial-value problem associated with the differential equation has a unique solution. Clearly $y^*(t) = 0$ is a solution of this initial-value problem; since $y(t)$ is also a solution of the same initial-value problem, uniqueness forces $y(t) = y^*(t) = 0$ for all $t$, not just at $t = t_0$.
We now have $c_1 y_1(t) + \cdots + c_n y_n(t) = 0$ for all $t$, where $c_1, \ldots, c_n$ are not all zero. Thus the functions $y_1(t), \ldots, y_n(t)$ are linearly dependent.
Conversely, if the functions $y_1(t), \ldots, y_n(t)$ are linearly dependent, then the system of equations
\begin{equation*}
\begin{array}{ccccccccc}
c_1 y_1(t) &+& c_2 y_2(t) &+& \cdots &+& c_n y_n(t) &=& 0 \\
c_1 y_1'(t) &+& c_2 y_2'(t) &+& \cdots &+& c_n y_n'(t) &=& 0 \\
\vdots & & \vdots & & & & \vdots & & \vdots \\
c_1 y_1^{(n-1)}(t) &+& c_2 y_2^{(n-1)}(t) &+& \cdots &+& c_n y_n^{(n-1)}(t) &=& 0
\end{array}
\end{equation*}
has a nontrivial solution for every $t$; hence the corresponding matrix is not invertible for any $t$, and $W[y_1, \ldots, y_n](t) = 0$ for all $t$.
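As a quick illustration with $n = 2$: $y_1(t) = e^t$ and $y_2(t) = 2e^t$ both solve $y'' - y = 0$ and are linearly dependent, and indeed
$$
W[y_1, y_2](t) = e^t \cdot 2e^t - 2e^t \cdot e^t = 0 \quad \text{for every } t,
$$
whereas the pair $y_1(t) = e^t$, $y_2(t) = e^{-t}$ gives $W[y_1, y_2](t) = e^t(-e^{-t}) - e^{-t} e^t = -2 \neq 0$, so that pair is linearly independent.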
Edit: After clarifying in chat, it seems there are two questions here regarding finding power series solutions to second-order linear ODEs.
- Will the two solutions I get always be linearly independent?
- After I've found two solutions, how can I check whether or not they are linearly independent?
Answer to Question 1:
When using the Method of Frobenius to obtain a power series solution of a second-order linear ODE in the neighborhood of a regular singular point, it can happen that you end up finding only one linearly independent power series solution. (You do find two solutions, but the second is a constant multiple of the first.)
This happens when solving Bessel's equation
$$
x^2y''(x)+xy'(x)+(x^2-\mu^2)y(x)=0
$$
about $x=0$ when the order $\mu$ is a nonnegative integer. It is a long and tedious calculation, too long to type out, but you can study it here in Example 5.7.2 starting on page 345.
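Very roughly, here is why (a sketch only, not the full calculation): substituting a Frobenius series $y(x)=\sum_{n=0}^\infty a_n x^{n+r}$ into Bessel's equation yields the indicial equation
$$
r^2 - \mu^2 = 0, \qquad r = \pm\mu,
$$
and when the two roots coincide or differ by an integer the second series is not guaranteed to be new; for integer $\mu$ it turns out to be a constant multiple of the first solution, and a genuinely independent second solution then contains a logarithmic term.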
However, you can always fall back on the method you alluded to above, often called reduction of order/variation of constants, to find a second linearly independent solution $y_2$ once you have any one solution $y_1$. That applies to Bessel's equation as well.
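For reference, if the equation is written in standard form $y'' + p(x)y' + q(x)y = 0$ and $y_1$ is a known solution that does not vanish on the interval of interest, reduction of order gives
$$
y_2(x) = y_1(x)\int \frac{e^{-\int p(x)\,dx}}{y_1(x)^2}\,dx,
$$
and this $y_2$ is automatically linearly independent of $y_1$.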
Answer to Question 2:
Typically we would compute the Wronskian of $y_1$ and $y_2$, $W(y_1,y_2)(x)$, and appeal to the fact that $y_1$ and $y_2$ are linearly independent iff the Wronskian is nonzero. When $y_1$ and $y_2$ are power series this looks to be a cumbersome calculation. However, let's appeal to the full power of the Wronskian statement regarding linear independence:
If $W(y_1,y_2)(x_0)\not=0$ for some $x_0\in I$, then $y_1(x)$ and
$y_2(x)$ are linearly independent on $I$.
Case 1: Ordinary points
Now just pick $x_0$ judiciously: say, $x_0$ is the ordinary point about which we found the two series solutions
$$
y_1(x)=\sum_{n=0}^\infty a_n(x-x_0)^n, \quad y_2(x)=\sum_{n=0}^\infty b_n(x-x_0)^n.
$$
Then
$$
W(y_1,y_2)(x_0)=y_1(x_0)y_2'(x_0)-y_2(x_0)y_1'(x_0)=a_0b_1-b_0a_1
$$
tells the tale (whether this is zero or not).
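If you want to sanity-check this by machine, here is a minimal sketch using SymPy, with $y''+y=0$ about $x_0=0$ as a stand-in example (the truncated series solutions are just the familiar cosine and sine expansions):

```python
import sympy as sp

x = sp.symbols('x')

# Truncated series solutions of y'' + y = 0 about x0 = 0 (cosine and sine):
# here a0 = 1, a1 = 0 and b0 = 0, b1 = 1.
y1 = 1 - x**2/2 + x**4/24
y2 = x - x**3/6 + x**5/120

# Wronskian W(y1, y2)(x) = y1*y2' - y2*y1'
W = sp.simplify(y1 * sp.diff(y2, x) - y2 * sp.diff(y1, x))

# At the expansion point this is a0*b1 - b0*a1 = 1, which is nonzero, so the
# two series solutions are linearly independent near x0 = 0.
print(W.subs(x, 0))  # prints 1
```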
Case 2: (Regular) singular points
This takes quite a bit of typing. I will punt by pointing you here, page 14. If someone has a better "preserved" source than this and/or wants to edit my answer to give the full details in this case, please do.
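A related fact that often saves work here is Abel's identity: for $y'' + p(x)y' + q(x)y = 0$ with $p$ continuous on an interval $I$,
$$
W(y_1,y_2)(x) = C\,e^{-\int p(x)\,dx},
$$
so for two solutions the Wronskian is either identically zero or never zero on $I$, and it is enough to evaluate it at a single convenient point of $I$ (in particular, away from the singular point).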
Best Answer
$\rm Theorem$. If the Wronskian of a set of $n$ functions defined on the interval $[a,b]$ is nonzero for at least one point in this interval, then the set of functions is linearly independent on the interval. If the Wronskian is identically zero on this interval and if each of the functions is a solution to the same linear differential equation, then the set of functions is linearly dependent.
Note that the assumption that these functions are solutions to a linear differential equation is not necessary for the part of the theorem you wish to invoke.