This question provides a nice opportunity to collect some facts on moment-generating functions (mgf).
In the answer below, we do the following:
- Show that if the mgf is finite for at least one (strictly) positive value
and one negative value, then all positive moments of $X$ are finite
(including nonintegral moments).
- Prove that the condition in the first item above is equivalent to the
distribution of $X$ having exponentially bounded tails. In other
words, the tails of $X$ fall off at least as fast as those of an
exponential random variable $Z$ (up to a constant).
- Provide a quick note on the characterization of the distribution by its mgf provided it satisfies the condition in item 1.
- Explore some examples and counterexamples to aid our intuition
and, particularly, to show that we should not attach undue importance to
the lack of finiteness of the mgf.
This answer is quite long, for which I apologize in advance. If this
would be better placed, e.g., as a blog post or somewhere else,
please feel free to provide such feedback in the comments.
What does the mgf say about the moments?
The mgf of a random variable $X \sim F$ is defined as $m(t) = \mathbb
E e^{tX}$. Note that $m(t)$ always exists since it is the integral
of a nonnegative measurable function. However, it may not be
finite. If it is finite (in the right places), then for all $p >
0$ (not necessarily an integer), the absolute moments $\mathbb E
|X|^p < \infty$ (and, so, also $\mathbb E X^p$ is finite). This is the topic of the next proposition.
Proposition: If there exists $\newcommand{\tn}{t_{n}}\newcommand{\tp}{t_{p}}\tn < 0$ and $\tp > 0$ such that $m(\tn) < \infty$ and $m(\tp) < \infty$, then the moments of all orders of $X$ exist and are finite.
Before diving into a proof, here are two useful lemmas.
Lemma 1: Suppose such $\tn$ and $\tp$ exist. Then for any $t_0 \in [\tn,\tp]$, $m(t_0) < \infty$.
Proof. This follows from convexity of $e^x$ and monotonicity of the integral. For any such $t_0$, there exists $\theta \in [0,1]$ such that $t_0 = \theta \tn + (1-\theta) \tp$. But, then
$$e^{t_0 X} = e^{\theta \tn X + (1-\theta) \tp X} \leq \theta e^{\tn X} + (1-\theta) e^{\tp X} \>.$$
Hence, by monotonicity of the integral, $\mathbb E e^{t_0 X} \leq \theta \mathbb E e^{\tn X} + (1-\theta) \mathbb E e^{\tp X} < \infty$.
So, if the mgf is finite at any two distinct points, it is finite for all values in the interval in between those points.
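(As a quick numerical aside, the convexity bound above is easy to check by simulation. The sketch below uses a standard Laplace variable, whose mgf $1/(1-t^2)$ is finite exactly on $(-1,1)$; the endpoints $\tn = -0.4$, $\tp = 0.4$, the weight $\theta$, and the sample size are arbitrary choices.)

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.laplace(size=10**6)      # standard Laplace: mgf finite exactly on (-1, 1)

t_n, t_p, theta = -0.4, 0.4, 0.3
t_0 = theta * t_n + (1 - theta) * t_p

lhs = np.mean(np.exp(t_0 * x))   # Monte Carlo estimate of m(t_0)
rhs = theta * np.mean(np.exp(t_n * x)) + (1 - theta) * np.mean(np.exp(t_p * x))
print(lhs, rhs, lhs <= rhs)      # convexity bound: m(t_0) <= theta*m(t_n) + (1-theta)*m(t_p)
```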
Lemma 2 (Nesting of $L_p$ spaces): For $0 \leq q \leq p$, if $\mathbb E |X|^p < \infty$, then $\mathbb E |X|^q < \infty$.
Proof: Two approaches are given in this answer and associated comments.
This gives us enough to continue with the proof of the proposition.
Proof of the proposition. If $\tn < 0$ and $\tp > 0$ exist as stated in the proposition, then taking $t_0 = \min(-\tn,\tp) > 0$, we know by the first lemma that $m(-t_0) < \infty$ and $m(t_0) < \infty$. But,
$$
e^{-t_0 X} + e^{t_0 X} = 2 \sum_{n=0}^\infty \frac{t_0^{2n} X^{2n}}{(2n)!} \>,
$$
and the right-hand side is composed of nonnegative terms, so, in particular, for any fixed $k$
$$
e^{-t_0 X} + e^{t_0 X} \geq 2 t_0^{2k} X^{2k}/(2k)! \>.
$$
Now, by assumption $\mathbb E e^{-t_0 X} + \mathbb E e^{t_0 X} < \infty$. Monotonicity of the integral yields $\mathbb E X^{2k} < \infty$. Hence, all even moments of $X$ are finite. Lemma 2 immediately allows us to "fill in the gaps" and conclude that all moments must be finite.
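As a numerical aside (not needed for the proof), the bound $\mathbb E X^{2k} \leq (2k)!\,\bigl(m(-t_0) + m(t_0)\bigr)/(2 t_0^{2k})$ implied by the last display can be checked directly for a standard normal, where both $m(\pm t_0) = e^{t_0^2/2}$ and $\mathbb E X^{2k} = (2k)!/(2^k k!)$ are available in closed form; the choice $t_0 = 1$ below is arbitrary.

```python
from math import exp, factorial

t0 = 1.0                    # any point works; the standard normal mgf is finite everywhere
m = exp(t0**2 / 2)          # m(t0) = m(-t0) = exp(t0^2 / 2) for X ~ N(0, 1)

for k in range(1, 6):
    moment = factorial(2 * k) / (2**k * factorial(k))         # E X^{2k} = (2k)!/(2^k k!)
    bound = factorial(2 * k) * (m + m) / (2 * t0**(2 * k))    # (2k)! (m(-t0) + m(t0)) / (2 t0^{2k})
    print(k, moment, bound, moment <= bound)
```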
Upshot
The upshot regarding the question at hand is that if any of the
moments of $X$ are infinite or do not exist, we can immediately
conclude that the mgf cannot be finite on any open interval containing
the origin. (This is just the contrapositive of the proposition.)
Thus, the proposition above provides the "right" condition in order to
say something about the moments of $X$ based on its mgf.
Exponentially bounded tails and the mgf
Proposition: The mgf $m(t)$ is finite in an open interval $(\tn,\tp)$
containing the origin if and only if the tails of $F$ are exponentially
bounded, i.e., $\mathbb P( |X| > x) \leq C e^{-t_0 x}$ for
some $C > 0$ and $t_0 > 0$.
Proof. We'll deal with the right tail separately. The left tail is
handled completely analogously.
$(\Rightarrow)$ Suppose $m(t_0) < \infty$ for some $t_0 > 0$. Then, the right tail of $F$ is exponentially bounded; in other words, there exists $C > 0$ and $b > 0$ such that
$$
\mathbb P(X > x) \leq C e^{-b x} \>.
$$
To see this, note that for any $t > 0$, by Markov's inequality,
$$
\mathbb P(X > x) = \mathbb P(e^{tX} > e^{tx}) \leq e^{-tx} \mathbb E e^{t X} = m(t) e^{-t x} \>.
$$
Take $C = m(t_0)$ and $b = t_0$ to complete this direction of the
proof.
$(\Leftarrow)$ Suppose there exists $C >0$ and $t_0 > 0$ such that
$\mathbb P(X > x) \leq C e^{-t_0 x}$. Then, for $t > 0$,
$$
\mathbb E e^{t X} = \int_0^\infty \mathbb P( e^{t X} > y)\,\mathrm dy
\leq 1 + \int_1^\infty \mathbb P( e^{t X} > y)\,\mathrm dy \leq 1 +
\int_1^\infty C
y^{-t_0/t} \, \mathrm dy \>,
$$
where the first equality follows from a standard fact about the
expectation of nonnegative random variables, and the last inequality uses
$\mathbb P(e^{tX} > y) = \mathbb P(X > t^{-1}\log y) \leq C e^{-(t_0/t)\log y} = C y^{-t_0/t}$
for $y \geq 1$. Choose any $t$ such that $0 < t < t_0$; then $t_0/t > 1$ and
the integral on the right-hand side is finite.
This completes the proof.
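To make the bound concrete, here is a small sketch for $X \sim \operatorname{Exp}(1)$, where $\mathbb P(X > x) = e^{-x}$ and $m(t) = 1/(1-t)$ for $t < 1$; it checks the Chernoff-style bound from the $(\Rightarrow)$ direction on a grid (the choice $t_0 = 0.5$ and the grid are arbitrary).

```python
import numpy as np

t0 = 0.5                        # any 0 < t0 < 1 keeps the Exp(1) mgf finite
m_t0 = 1.0 / (1.0 - t0)         # m(t0) for X ~ Exp(1)

x = np.linspace(0.0, 20.0, 201)
tail = np.exp(-x)               # exact tail P(X > x)
bound = m_t0 * np.exp(-t0 * x)  # bound m(t0) * exp(-t0 * x) from Markov's inequality
print(bool(np.all(tail <= bound)))
```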
A note on uniqueness of a distribution given its mgf
If the mgf is finite in an open interval containing zero, then the associated distribution is characterized by its moments, i.e., it is the only distribution with the moments $\mu_n = \mathbb E X^n$. A standard proof is short once one has at hand some (relatively straightforward) facts about characteristic functions. Details can be found in most modern probability texts (e.g., Billingsley or Durrett). A couple related matters are discussed in this answer.
Examples and counterexamples
(a) Lognormal distribution: $X$ is lognormal if $X = e^Y$ for some normal random variable $Y$. So $X \geq 0$ with probability one. Because $e^{-x} \leq 1$ for all $x \geq 0$, this immediately tells us that $m(t) = \mathbb E e^{t X} \leq 1$ for all $t < 0$. So, the mgf is finite on the nonpositive half-line $(-\infty,0]$. (NB We've only used the nonnegativity of $X$ to establish this fact, so this is true for all nonnegative random variables.)
However, $m(t) = \infty$ for all $t > 0$. We'll take the standard lognormal as the canonical case. If $x > 0$, then $e^{x} \geq 1 + x + \frac{1}{2} x^2 + \frac{1}{6} x^3$. By change of variables, we have
$$
\mathbb E e^{t X} = (2\pi)^{-1/2} \int_{-\infty}^\infty e^{t e^u - u^2/2} \,\mathrm d u \>.
$$
For $t > 0$ and large enough $u$, we have $t e^u - u^2/2 \geq t+tu$, since the bound above gives $t e^u \geq t + tu + \frac{t}{2} u^2 + \frac{t}{6} u^3$ and the cubic term eventually dominates $u^2/2$. But,
$$
\int_{K}^\infty e^{t + tu} \,\mathrm du = \infty
$$ for any $K$, and so the mgf is infinite for all $t > 0$.
On the other hand, all moments of the lognormal distribution are finite. So, the existence of the mgf in an interval about zero is not necessary for the conclusion of the above proposition.
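Both halves of this can be seen numerically with a rough sketch: Monte Carlo reproduces a finite moment (e.g. $\mathbb E X^2 = e^2$ for the standard lognormal), while a naive Riemann sum over increasing truncations shows the integral above blowing up for $t > 0$. The sample size, truncation points, and the choice $t = 0.1$ below are arbitrary.

```python
import numpy as np

def truncated_mgf(t, K, n=200_000):
    """Riemann sum of (2*pi)^(-1/2) * exp(t*e^u - u^2/2) over u in [-K, K]."""
    u = np.linspace(-K, K, n)
    du = u[1] - u[0]
    return np.sum(np.exp(t * np.exp(u) - u**2 / 2)) * du / np.sqrt(2 * np.pi)

# The moments are finite: E X^2 = exp(2) for the standard lognormal.
rng = np.random.default_rng(0)
x = np.exp(rng.standard_normal(10**6))
print(np.mean(x**2), np.exp(2))

# But the truncated "mgf" integral at t = 0.1 grows without bound as K increases.
for K in (2, 4, 6, 8):
    print(K, truncated_mgf(0.1, K))
```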
(b) Symmetrized lognormal: We can get an even more extreme case by "symmetrizing" the lognormal distribution. Consider the density $f(x)$ for $x \in \mathbb R$ such that
$$
f(x) = \frac{1}{2\sqrt{2\pi}|x|} e^{-\frac{1}{2} (\log |x|)^2} \>.
$$
It is not hard to see in light of the previous example that the mgf is finite only for $t = 0$. Yet, the even moments are exactly the same as those of the lognormal and the odd moments are all zero! So, the mgf is finite nowhere (except at the origin, where it always equals one), and yet we can guarantee finite moments of all orders.
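One convenient way to simulate from this density is to attach an independent random sign to a standard lognormal draw; a quick Monte Carlo sketch (sample size arbitrary) illustrates the claim about the moments.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10**6
sign = rng.choice([-1.0, 1.0], size=n)
x = sign * np.exp(rng.standard_normal(n))   # draws from the symmetrized lognormal density

print(np.mean(x**2), np.exp(2))   # even moments match the lognormal: E X^2 = e^2
print(np.mean(x))                 # odd moments are zero (up to Monte Carlo noise)
```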
(c) Cauchy distribution: This distribution also has an mgf which is infinite for all $t \neq 0$, but no absolute moments $\mathbb E|X|^p$ are finite for $p \geq 1$. The result for the mgf follows for $t > 0$ since $e^x \geq x^3 / 6$ for $x > 0$ and so
$$
\mathbb E e^{tX} \geq \int_1^\infty \frac{t^3 x^3}{6\pi(1+x^2)} \,\mathrm dx \geq \frac{t^3}{12\pi} \int_1^\infty x \,\mathrm dx = \infty \>.
$$
The proof for $t < 0$ is analogous. (Perhaps somewhat less well known is that the moments for $0 < p < 1$ do exist for the Cauchy. See this answer.)
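(As a numerical aside, the fractional absolute moments can be computed by quadrature and compared with the known closed form $\mathbb E|X|^p = \sec(p\pi/2)$ for the standard Cauchy with $0 < p < 1$, while the truncated $p = 1$ integral, which has the closed form $\log(1+K^2)/\pi$, shows the logarithmic divergence; the truncation points below are arbitrary.)

```python
import numpy as np
from scipy.integrate import quad

# Fractional absolute moments of a standard Cauchy: E|X|^p = (2/pi) * int_0^inf x^p/(1+x^2) dx.
for p in (0.25, 0.5, 0.75):
    val, _ = quad(lambda x, p=p: x**p / (1.0 + x**2), 0.0, np.inf, limit=200)
    print(p, 2.0 * val / np.pi, 1.0 / np.cos(np.pi * p / 2))   # agrees with sec(p*pi/2)

# For p = 1, (2/pi) * int_0^K x/(1+x^2) dx = log(1+K^2)/pi grows without bound in K.
for K in (1e2, 1e4, 1e6):
    print(K, np.log1p(K**2) / np.pi)
```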
(d) Half-Cauchy distribution: If $X$ is (standard) Cauchy, call $Y = |X|$ a half-Cauchy random variable. Then, it is easy to see from the previous example that $\mathbb E Y^p = \infty$ for all $p \geq 1$; yet, $\mathbb E e^{tY}$ is finite for $t \in (-\infty,0]$.
That's a good question, but a broad one, so I can't promise I'll say everything about it that should be said. The short answer is that rival techniques differ not in what they can do, but in how neatly they can do it.
Characteristic functions require extra caution because of the role of complex numbers. It's not even that the student needs to know about complex numbers; it's that the calculus involved has subtle pitfalls. For example, I can get a Normal distribution's MGF just by completing the square in a variable-shifting substitution, but a lot of sources carelessly pretend the approach using characteristic functions is just as easy. It isn't, because the famous normalisation of the Gaussian integral says nothing about integration on $ic+\mathbb{R}$ with $c\in\mathbb{R}\backslash\{ 0\}$. Oh, we can still evaluate the integral if we're careful with contours, and in fact there's an even easier approach, in which we show by integrating by parts that an $N(0,\,1)$ distribution's characteristic function $\phi (t)$ satisfies $\dot{\phi}=-t\phi$. But the MGF approach is even simpler, and most of the distributions students need early on have a convergent MGF on either a line segment (e.g. Laplace) or half-line (e.g. Gamma, geometric, negative binomial), or the whole of $\mathbb{R}$ (e.g. Beta, binomial, Poisson, Normal). Either way, that's enough to study moments.
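For concreteness, the completing-the-square computation for a standard Normal variable is just one line:
$$
\mathbb E e^{tX} = \int_{-\infty}^\infty \frac{e^{tx - x^2/2}}{\sqrt{2\pi}}\,\mathrm dx = e^{t^2/2}\int_{-\infty}^\infty \frac{e^{-(x-t)^2/2}}{\sqrt{2\pi}}\,\mathrm dx = e^{t^2/2} \>,
$$
where the final step is the ordinary Gaussian normalisation after the real shift $x \mapsto x + t$; no contour argument is needed.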
I don't think there's anything you can do only with the MGF, but you use what's easiest for the task at hand. Here's one for you: what's the easiest way to compute the moments of a Poisson distribution? I'd argue it's to use a different technique again, the probability-generating function $G(t)=\mathbb{E}t^X=\exp \lambda (t-1)$. Then the falling factorial (Pochhammer symbol) $(X)_k = X(X-1)\cdots(X-k+1)$ gives $\mathbb{E}(X)_k=G^{(k)}(1)=\lambda^k$. In general it's usually worth using the PGF for discrete distributions, the MGF for continuous distributions that either are bounded or have superexponential decay in the PDF's tails, and the characteristic function when you really need it.
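Here is a quick symbolic check of that factorial-moment computation (a SymPy sketch; nothing depends on the particular symbol names):

```python
import sympy as sp

t, lam = sp.symbols('t lambda', positive=True)
G = sp.exp(lam * (t - 1))        # PGF of Poisson(lambda): E t^X

for k in range(1, 5):
    # k-th derivative of the PGF at t = 1 is the k-th factorial moment E[(X)_k]
    print(k, sp.simplify(sp.diff(G, t, k).subs(t, 1)))   # prints lambda**k
```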
And depending on the question you're asking, you may instead find it prudent to use the cumulant generating function, be it defined as the logarithm of the MGF or CF. For example, I'll leave it as an exercise that the log-MGF definition of cumulants for the maximum of $n$ $\operatorname{Exp}(1)$ iids gives $\kappa_m=(m-1)!\sum_{k=1}^n k^{-m}$, which provides a much easier computation of the mean and variance (respectively $\kappa_1$ and $\kappa_2$) than if you'd written them in terms of moments.
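A Monte Carlo check of the resulting mean and variance (the derivation itself is left as the exercise above; $n = 5$ and the number of replications are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 5, 10**6
m = rng.exponential(size=(reps, n)).max(axis=1)   # maximum of n iid Exp(1) draws

k = np.arange(1, n + 1)
print(m.mean(), np.sum(1.0 / k))      # kappa_1 = sum_{k=1}^n 1/k
print(m.var(), np.sum(1.0 / k**2))    # kappa_2 = sum_{k=1}^n 1/k^2
```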
Best Answer
$M_X^{(2)}(0)$ is not a variance, it is $E(X^2)$. So the variance can be obtained by $$Var(X) = E(X^2) - E(X)^2 = M_X^{(2)}(0) - [M_X^{(1)}(0)]^2 = \frac{1}{\lambda^2}$$
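The original question isn't reproduced above, but the $1/\lambda^2$ result is consistent with $X \sim \operatorname{Exp}(\lambda)$, whose mgf is $M_X(t) = \lambda/(\lambda - t)$ for $t < \lambda$. Under that assumption, the computation can be verified symbolically with a short SymPy sketch:

```python
import sympy as sp

t, lam = sp.symbols('t lambda', positive=True)
M = lam / (lam - t)               # mgf of Exp(lambda), finite for t < lambda (assumed distribution)

m1 = sp.diff(M, t, 1).subs(t, 0)  # M'(0)  = E X   = 1/lambda
m2 = sp.diff(M, t, 2).subs(t, 0)  # M''(0) = E X^2 = 2/lambda^2
print(sp.simplify(m2 - m1**2))    # Var X = E X^2 - (E X)^2 = 1/lambda^2
```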