Firstly, you should always use your intuition. If you find that your intuition was correct, then smile. If you find that your intuition was wrong, use the experience to fine-tune your intuition.
I hope I'm interpreting your question correctly - here goes. Since you are not interested in any of the proofs, I'll just focus on intuition. Now, let's consider a series of the form $\sum _n \frac{1}{n^p}$, with $p>0$ a parameter. Intuitively, the convergence or divergence of the series depends on how fast the general term $\frac{1}{n^p}$ tends to $0$. This is so because the sum is a sum of infinitely many positive quantities. If these quantities converge to $0$ too slowly, the sheer number of summands in each partial sum outweighs their small magnitude, and the partial sums grow without bound. However, if the quantities converge to $0$ fast enough, then the smallness of the summands wins out over the fact that there are lots of them, and the partial sums stay bounded.
So, the question is how fast $\frac{1}{n^p}$ converges to $0$. Let's look at some extreme values of $p$. If $p$ is very large, say $p=1000$, then $\frac{1}{n^p}$ becomes very small very fast (experiment with computing just a few values to see that). So, when $p$ is large, it seems the general term converges to $0$ very fast, and thus we'd expect the series to converge. However, if the value of $p$ is very small, say $p=\frac{1}{1000}$, then $\frac{1}{n^p}$ is actually pretty large for the first few values of $n$, and while it does monotonically tend to $0$, it does so very slowly. So, we'd expect the series to diverge when $p$ is small.
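The suggested experiment takes only a few lines. This is just an illustrative sketch (the particular values of $n$ and $p$ are chosen arbitrarily), but it makes the contrast vivid: for $p=1000$ the general term is astronomically small already at $n=2$, while for $p=\frac{1}{1000}$ it is still barely below $1$:

```python
# Illustrative experiment: how fast does 1/n**p shrink for extreme p?
for n in [2, 3, 5, 10]:
    fast = 1 / n**1000       # p = 1000: essentially zero already at n = 2
    slow = 1 / n**(1 / 1000) # p = 1/1000: still very close to 1
    print(f"n={n}: 1/n^1000 = {fast:.3e},  1/n^(1/1000) = {slow:.6f}")
```

For instance, $1/2^{1000}$ is below $10^{-300}$, while $1/10^{1/1000}$ is still above $0.99$.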
Now, if $0<p<q$ then $\frac{1}{n^q}<\frac{1}{n^p}$, so the bigger the parameter the faster the convergence of the general term to $0$ gets. So, small values of the parameter imply divergence of the series, while large values of the parameter imply convergence of the series. So, somewhere in the middle there has to be a value $b$ for the parameter such that if $p<b$ then the series diverges, while if $p>b$ then the series converges.
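The same sort of experiment can be run on the partial sums themselves. Here is a minimal sketch (the exponents $2$ and $\frac{1}{2}$ are chosen just as convenient representatives of "large" and "small" $p$): for $p=2$ the partial sums level off, while for $p=\frac{1}{2}$ they keep growing.

```python
# Partial sums of sum 1/n**p for a representative large and small p.
def partial_sum(p, N):
    """Sum of 1/n**p for n = 1, ..., N."""
    return sum(1 / n**p for n in range(1, N + 1))

for N in [10, 100, 1000, 10000]:
    print(f"N={N:>5}:  p=2 -> {partial_sum(2, N):.5f}   "
          f"p=1/2 -> {partial_sum(0.5, N):.2f}")
```

The $p=2$ column stabilizes (near $1.64$), whereas the $p=\frac{1}{2}$ column roughly doubles each time $N$ is multiplied by $4$, consistent with unbounded growth.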
So, just by this straightforward analysis of the behaviour with respect to varying the parameter $p$, we know (intuitively) that there must be some cut-off value for $p$ that is the gateway between convergence and divergence. What happens at that gateway value of $p$ is unclear, and there is no compelling reason to suspect one behaviour of the series over another. Now, the particular whereabouts of that special gateway value for $p$ should depend strongly on the particularities of the general term. This is thus where you'll have to delve into more rigorous proofs.
I hope this rather lengthy answer addresses what you were wondering about. Basically, it says that a cutoff parameter must exist, but we can't expect to say anything about its whereabouts, or about the behaviour at that cutoff value, without careful study of the general terms.
If a power series centered at $a$ converges absolutely at a point $w$, then it also converges at all points $z$ with $|z-a|<|w-a|$.
On any disc where a power series converges absolutely, it converges to a holomorphic function $g$.
Finally, let $s$ be a singular point. The function $f$ and the function $g$ to which the power series converges coincide in the disc $|z-a|<|s-a|$. Therefore $g$ is a holomorphic extension of $f$ to a neighborhood of $s$. This contradicts the definition of singular point.
You can read $e^x=\sum_{n=0}^\infty \frac{x^n}{n!}$ as saying that you can compute the exponential of anything by raising it to powers, dividing by $n!$, and summing. Whether the argument of the exponential comes to you as $x$ or as some function $f(x)$, it does not matter.
One subtlety that you may not have noticed is that the series you get might not be a Taylor series. When you substitute $x=t^2/2$, you actually do get the Taylor series of $e^{t^2/2}$ at $t=0$. (This is not completely trivial; there is some "uniqueness of power series" going on here.) On the other hand, if you substitute, say, $x=\sin(t)$, then the result is a series in powers of $\sin(t)$, not in powers of $t$, so it is not a Taylor series anymore.