Am I correct in believing Frobenius' method is simply a general case of a power series solution? For instance, if the Differential Equation has a regular singular point, would you be forced to use Frobenius' method instead of a regular power series?
[Math] When to use Frobenius method
ordinary-differential-equations, sequences-and-series
Related Solutions
First you make the substitution $x\to\frac{1}{z}$, transform the ODE into one in the variable $z$, and obtain a solution $y(z)$ which contains some $\ln(z)$ terms. Then you just substitute back $z\to\frac{1}{x}$ and get terms with $\ln\frac{1}{x}$.
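A sketch of the derivative transformation under $x=\frac{1}{z}$ (so $z=\frac{1}{x}$ and $\frac{dz}{dx}=-z^2$), by the chain rule:

$$\frac{dy}{dx} = \frac{dy}{dz}\frac{dz}{dx} = -z^2\frac{dy}{dz}, \qquad \frac{d^2y}{dx^2} = \frac{d}{dx}\left(-z^2\frac{dy}{dz}\right) = z^4\frac{d^2y}{dz^2} + 2z^3\frac{dy}{dz}.$$

Substituting these into the original ODE produces the transformed equation in $z$.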
Is there some connection [...] that produces the Frobenius method by analogy with non-singular equations?
When solving a differential equation by power series, you assume that your solution can be expressed as a power series (a quite reasonable assumption, since every analytic function can be expressed that way). You then substitute the power series into your equation and solve for the coefficients.
This works quite well when the variable coefficients of the derivatives are "nice" functions. But when they have singularities, there's a problem with that approach: when you substitute your power series and simplify, you'll come to something along the lines of
$$(\mbox{some expression with n})\cdot a_n = 0$$
and you find that you cannot make the expression in parentheses equal to $0$ for every value of $n$, which implies that it is $a_n$ that must be $0$. But this gives us just a finite power series (that is, a polynomial); or, worse, all the coefficients equal zero, which is just the trivial solution $y = 0$. It is correct, but not of much interest.
The reason we cannot get a nice power series in this case is that our assumption was wrong: what mathematics is trying to tell us here is that a solution with singularities cannot be expressed as a simple power series, because it involves negative powers of $x$ (which mean division by $0$ at $x=0$), or even fractional powers (roots, which are undefined for some numbers), or arbitrary real or complex powers.
Enter the Frobenius method.
This can easily be fixed by assuming something more: that our power series is multiplied by $x^r$, where $r$ can be any real or complex exponent. This factor accounts for the non-natural powers of $x$ in our solution. So we have the ansatz (trial solution):
$$y(x) = x^r \sum_{n=0}^\infty a_n x^n$$
which we can rewrite more simply as:
$$y(x) = \sum_{n=0}^\infty a_n x^{r+n}$$
When you substitute this trial function into your equation, you get the indicial equation, from which you can determine the value(s) of the exponent $r$.
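As an illustration (the equation $4xy''+2y'+y=0$ is a hypothetical example, not from the question): substituting the trial function, the lowest power $x^{r-1}$ gives the indicial equation $2r(2r-1)a_0=0$, so $r=0$ or $r=\frac{1}{2}$, and matching the coefficient of $x^{n+r-1}$ gives the recurrence $2(n+r)(2(n+r)-1)a_n=-a_{n-1}$, which we can iterate exactly:

```python
from fractions import Fraction
from math import factorial

# Frobenius ansatz y = sum a_n x^(n+r) in 4x y'' + 2y' + y = 0 (hypothetical example).
# Indicial equation (coefficient of x^(r-1)): 2r(2r-1) a_0 = 0, so r = 0 or r = 1/2.
# Recurrence (coefficient of x^(n+r-1)): 2(n+r)(2(n+r)-1) a_n = -a_{n-1}.

def frobenius_coeffs(r, n_terms, a0=Fraction(1)):
    """Exact coefficients a_0 .. a_{n_terms-1} for the indicial root r."""
    a = [a0]
    for n in range(1, n_terms):
        m = Fraction(n) + r
        a.append(-a[-1] / (2 * m * (2 * m - 1)))
    return a

# r = 0 yields a_n = (-1)^n / (2n)!  -- the series of cos(sqrt(x));
# r = 1/2 yields a_n = (-1)^n / (2n+1)!  -- the series of sin(sqrt(x)) / sqrt(x) * sqrt(x).
for n, a in enumerate(frobenius_coeffs(Fraction(0), 6)):
    print(n, a)
```

Running the recurrence with exact rationals makes the factorial pattern visible immediately, which is how one recognizes the closed-form solutions.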
Edit: I added answers for your follow-up questions to clarify it somewhat more.
Why should multiplying by this weird power help?
As I've said, some functions are not "nice": they have singularities (e.g. divisions by zero, discontinuities, etc.). Hence they cannot be expressed as a simple power series such as
$$y = \sum_{n=0}^\infty a_n x^n$$
because they don't have natural exponents in their powers. Their powers can be negative (division), fractional (root extraction), or even some real or complex exponents. So we need to modify our power series to account for such "weird" exponents. We do that by adding (or subtracting) some other constant $r$ to (from) our exponent $n$:
$$y = \sum_{n=0}^\infty a_n x^{n+r}$$
Although $n$ can only be a natural number, $r$ can now be any number (even a complex one), and we have to determine which number it is (sometimes there is more than one answer to that question). That's what the indicial equation is for.
But before that, we should notice something: after adding/subtracting $r$ to/from the exponent, this is no longer a standard power series (its exponent can now be non-natural). But we can use the rules for exponents to extract the $r$th power of $x$:
$$y = \sum_{n=0}^\infty a_n x^n x^r$$
and then move it out of the sum, since it is a common factor of all the terms:
$$y = x^r \sum_{n=0}^\infty a_n x^n$$
and we have our original power series back, now multiplied by some power of $x$. And this is the answer to your question of why we multiply the trial series by $x^r$.
If a power series should occur, is there any relationship with the series solutions to constant-coefficient equations?
Yes, there is. For constant-coefficient equations the solution has the form:
$$ y = C_0 e^{r_1 x} + C_1 e^{r_2 x}$$
where $r_1$ and $r_2$ are roots of the "characteristic equation" (the constant-coefficient analogue of the indicial equation). You could get the same answer with a simple power series, since constant coefficients are "nice" (analytic) functions too. You'd get a pattern in the coefficients of the power series, with factorials in there, which indicates the exponential function, and this is indeed the solution. If you have some pattern-matching skill, you can try recovering the original function (the exponential in this case) from the power series. Been there, done that; it is perfectly possible.
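A quick sketch of that factorial pattern for $y'=ky$ (the value $k=3$ below is a hypothetical choice; exact rationals are used so the pattern $a_n = k^n/n!$ is plainly visible):

```python
from fractions import Fraction
from math import factorial

# Power-series solution of y' = k y: substituting y = sum a_n x^n and matching
# powers gives (n+1) a_{n+1} = k a_n, i.e. a_n = a_0 k^n / n! -- the series of a_0 e^{kx}.

def series_coeffs(k, n_terms, a0=Fraction(1)):
    a = [a0]
    for n in range(n_terms - 1):
        a.append(Fraction(k) * a[-1] / (n + 1))
    return a

coeffs = series_coeffs(3, 6)   # k = 3 is an arbitrary illustrative value
print(coeffs)
```

Each coefficient is the previous one times $k/(n+1)$, which is exactly how the factorial in $e^{kx}=\sum k^n x^n/n!$ builds up.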
In the case of a variable-coefficient equation such as the Cauchy-Euler (or equidimensional) equation, we are told that the solution has the form:
$$y = C_1 x^{r_1} + C_2 x^{r_2}$$
which doesn't look like the exponentials we are familiar with from constant-coefficient equations. But note that you can rewrite $x$ as $e^{\ln x}$, and when you substitute this into the solution pattern you get:
$$y = C_1 (e^{\ln x})^{r_1} + C_2 (e^{\ln x})^{r_2}$$
and using the laws of exponents we see that:
$$y = C_1 e^{r_1 \ln x} + C_2 e^{r_2 \ln x}$$
and this form should look more familiar, since it is again a sum of exponentials. But now the argument is the natural logarithm of $x$ instead of $x$ itself.
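For instance (a hypothetical example), for the Cauchy-Euler equation $x^2y'' - xy' - 3y = 0$, the trial $y=x^r$ gives $r(r-1)-r-3 = r^2-2r-3 = (r-3)(r+1)=0$, so

$$y = C_1 x^{3} + C_2 x^{-1} = C_1 e^{3\ln x} + C_2 e^{-\ln x},$$

again a sum of exponentials, now in $\ln x$.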
The ansatz approach is certainly motivational, but it's not intuitive and gets less satisfying the more you learn.
Well, intuition is also something you learn. For you this might not be intuitive yet, but as you gain experience and learn some patterns, you'll start to notice them in your equations as they appear again and again, and then they'll become intuitive to you.
These questions can't be answered by ansatz, which students always interpret as "guess and check", which it sort of is.
It's not so much "guessing" as an "educated guess", or "intelligent heuristics". You don't guess just anything ─ you use what you know to guide your guess. For example, when you have a simple equation like this:
$$y' = k y$$
you can think along these lines:
"What function $y$ is proportional to its own derivative?" There is essentially only one such function: the exponential, $y = e^x$. But we want a function whose derivative is proportional to the function itself with a certain constant of proportionality, which is $k$. If you know that $\left(e^{kx}\right)' = k e^{kx}$, then you can guess the correct solution right away. But if you don't, you can leave some constant "to be determined later" in your trial solution, substitute it into the equation, and see what happens and what value that constant must take for the equation to be satisfied.
It happens that the exponential function solves any linear constant-coefficient differential equation, and this is not a coincidence: the exponential is the eigenfunction (or proper function) of differentiation. This means that when you differentiate it, it comes out untouched, except possibly multiplied by a constant (which is then the eigenvalue; in the case above it was $k$). So it is not much of a "guess" when you try the exponential on other similar equations to see if it still fits. If for some reason it doesn't fit, you'll lose nothing: mathematics will tell you that your assumption was wrong, and sometimes you can notice what needs to be adjusted to get the correct answer. For example, you may need some power of $e^x$ (such as $e^{kx}$, as in our example above), or you may need to multiply by $x$ or some power of it (such as $x^r$ in our example with the Frobenius series).
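A small sketch of that "constant to be determined later" idea, using the hypothetical equation $y'' - 3y' + 2y = 0$: substituting $y = e^{rx}$ leaves $(r^2 - 3r + 2)\,e^{rx} = 0$, so $r$ must be a root of the characteristic polynomial:

```python
import math

# Trial solution y = e^{rx} in y'' - 3y' + 2y = 0 (a hypothetical example).
# Substituting gives (r^2 - 3r + 2) e^{rx} = 0, so r satisfies r^2 - 3r + 2 = 0.

def characteristic_roots(b, c):
    """Real roots of r^2 + b r + c = 0, i.e. for y'' + b y' + c y = 0."""
    disc = math.sqrt(b * b - 4 * c)
    return sorted([(-b - disc) / 2, (-b + disc) / 2])

roots = characteristic_roots(-3.0, 2.0)

def residual(r, x):
    """Left-hand side of the ODE evaluated on y = e^{rx} at the point x."""
    y = math.exp(r * x)
    return r * r * y - 3 * r * y + 2 * y

# The residual vanishes (up to rounding) exactly when r is one of the roots.
print(roots, residual(roots[0], 0.7))
```

The constant we "left to be determined" is pinned down by the equation itself; that is the whole content of the heuristic.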
The same goes for variable-coefficient equations, like the Cauchy-Euler equation. You can see that there is a power of $x$ at every derivative, and that this power of $x$ has the same degree as the order of the derivative. Recall that when we differentiate a power of $x$, its exponent drops by 1 and comes out as a constant multiplier. But the variable coefficient is also a power of $x$, so multiplying the derivative by it raises the power back. And since the degree of that coefficient equals the order of the derivative, whatever the exponent lost in differentiation, it gains back again to the same level. So we have the same power of $x$ everywhere (a common factor!), just multiplied by some constants, which we can always tweak to make the whole equation equal $0$. So substituting $x^r$ into the equation as a trial function isn't such a bad idea in this case. And, as you can see, you can deduce what function will work from the form of the equation, by observing what the derivatives will do to it and how you can make the whole equation cancel.
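A minimal numerical sketch of that cancellation (the coefficients $a$, $b$ and the root $r$ below are hypothetical, chosen so that $r(r-1)+ar+b=0$):

```python
# Each term of the Cauchy-Euler equation x^2 y'' + a x y' + b y maps x^r to a
# constant multiple of x^r (the power lost to differentiation is restored by
# the coefficient), so y = x^r is a solution iff r(r-1) + a r + b = 0.

def euler_residual(a, b, r, x):
    """x^2 y'' + a x y' + b y evaluated on y = x^r at the point x."""
    y   = x ** r
    yp  = r * x ** (r - 1)
    ypp = r * (r - 1) * x ** (r - 2)
    return x * x * ypp + a * x * yp + b * y

def indicial(a, b, r):
    """The common-factor coefficient r(r-1) + a r + b."""
    return r * (r - 1) + a * r + b

# Hypothetical values: r = 2 solves r(r-1) + 2r - 6 = 0, so the residual vanishes.
print(indicial(2.0, -6.0, 2.0), euler_residual(2.0, -6.0, 2.0, 1.5))
```

The residual is the indicial polynomial times $x^r$, so it vanishes at every $x$ precisely when $r$ is a root.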
Best Answer
In fact, the Frobenius method is just an extension of the power series method: you add an extra power $x^r$, where $r$ need not be an integer, to each term of the power series (and sometimes a logarithmic term as well) in the assumed form of the solution of the linear ODE. This lets you find a full set of linearly independent solutions in cases where the plain power series method cannot.
You can still use the Frobenius method even when the power series method would already yield all the linearly independent solutions of the linear ODE; you will simply find that the extra exponent $r$ turns out to be a non-negative integer and that no log term is needed.
If you are in doubt whether the power series method will find all the linearly independent solutions of a linear ODE, feel free to use the Frobenius method instead. The only cost is that you have to write a few more steps.
For example, solving $y''+xy=0$ by the Frobenius method:
Let $y=\sum\limits_{n=0}^\infty a_nx^{n+r}$ ,
Then $y'=\sum\limits_{n=0}^\infty(n+r)a_nx^{n+r-1}$
$y''=\sum\limits_{n=0}^\infty(n+r)(n+r-1)a_nx^{n+r-2}$
$\therefore\sum\limits_{n=0}^\infty(n+r)(n+r-1)a_nx^{n+r-2}+x\sum\limits_{n=0}^\infty a_nx^{n+r}=0$
$\sum\limits_{n=0}^\infty(n+r)(n+r-1)a_nx^{n+r-2}+\sum\limits_{n=0}^\infty a_nx^{n+r+1}=0$
$\sum\limits_{n=0}^\infty(n+r)(n+r-1)a_nx^{n+r-2}+\sum\limits_{n=3}^\infty a_{n-3}x^{n+r-2}=0$
$r(r-1)a_0x^{r-2}+r(r+1)a_1x^{r-1}+(r+1)(r+2)a_2x^r+\sum\limits_{n=3}^\infty((n+r)(n+r-1)a_n+a_{n-3})x^{n+r-2}=0$
$\therefore r=1,0,-1,-2$ (from $r(r-1)=0$, $r(r+1)=0$ and $(r+1)(r+2)=0$)
When we take $r=0$ ,
$2a_2+\sum\limits_{n=3}^\infty(n(n-1)a_n+a_{n-3})x^{n-2}=0$
Since we can already find all the linearly independent solutions from this relation by taking $r$ to be a non-negative integer, and taking $r$ as a non-negative integer makes the assumed solution form the same as an ordinary power series, this shows that we can solve $y''+xy=0$ by the power series method alone.
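A sketch of that recurrence iterated in code (the recurrence $n(n-1)a_n=-a_{n-3}$ is read off from the relation above; the initial values $y(0)=1$, $y'(0)=0$ are an arbitrary illustrative choice):

```python
from fractions import Fraction

# Power-series coefficients for y'' + x y = 0 with r = 0 (the case above):
# the relation 2 a_2 + sum (n(n-1) a_n + a_{n-3}) x^{n-2} = 0 forces a_2 = 0
# and a_n = -a_{n-3} / (n(n-1)) for n >= 3; a_0 and a_1 remain free.

def series_coeffs(a0, a1, n_terms):
    a = [Fraction(a0), Fraction(a1), Fraction(0)]
    for n in range(3, n_terms):
        a.append(-a[n - 3] / (n * (n - 1)))
    return a[:n_terms]

coeffs = series_coeffs(1, 0, 10)   # solution with y(0) = 1, y'(0) = 0
print(coeffs)
```

Only every third coefficient is nonzero for this choice of initial values, since the recurrence steps by 3.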