I'm trying to prove the Cramér-Lundberg inequality, which bounds the probability of ruin for an insurance company with a given initial capital. Specifically, suppose $Y_n$ is the difference between the premiums received and the claims paid by an insurance company in period $n$, so that $X_n = Y_1 + \cdots + Y_n$ is the company's total gain up to time $n$, and let $k_0$ be the initial capital. Then the probability of eventual ruin $p(k_0)$ satisfies the Cramér-Lundberg inequality:
$$
p(k_0) := \mathbb P\left[ \inf\left\{ X_n + k_0 : n \in \mathbb N_0 \right\} < 0 \right] \leq \exp\left(\theta^* k_0\right)
$$
where $\theta^* < 0$ satisfies $\log\left( \mathbb E\left[ \exp(\theta^* Y_1 )\right]\right) = 0$.
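(As a concrete example, not from my reference text: if $Y_1 \sim \mathcal N(\mu, \sigma^2)$ with $\mu > 0$, then $\mathbb E\left[\exp(\theta Y_1)\right] = \exp\left(\mu\theta + \tfrac{1}{2}\sigma^2\theta^2\right)$, so the condition $\log\left(\mathbb E\left[\exp(\theta^* Y_1)\right]\right) = 0$ becomes $\mu\theta^* + \tfrac{1}{2}\sigma^2(\theta^*)^2 = 0$, whose nonzero root is $\theta^* = -2\mu/\sigma^2 < 0$, and the bound reads $p(k_0) \leq \exp\left(-2\mu k_0/\sigma^2\right)$.)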
My reference text proposes proving this in the following steps. Suppose $Y_1, Y_2, \ldots$ are i.i.d. integrable random variables that are not almost surely constant. Let $X_n = Y_1 + \cdots + Y_n$, and suppose there is $\delta > 0$ so that $\mathbb E\left[\exp\left(\theta Y_1 \right)\right] < \infty$ for all $\theta \in (-\delta, \delta)$. Define $\psi : (-\delta, \delta) \to \mathbb R$ by $$\psi(\theta) := \log \left(\mathbb E\left[\exp\left(\theta Y_1 \right)\right]\right)$$
and define the process $Z^\theta = \left(Z^\theta_n\right)_{n \geq 1}$ by $Z_n^\theta := \exp\left(\theta X_n - n\psi(\theta)\right)$. We are asked to show the following:
1. $Z^\theta$ is a martingale for all $\theta \in (-\delta, \delta)$ (see the normalization identity just after this list).
2. $\psi$ is strictly convex.
3. $\mathbb E\left[\sqrt{Z_n^\theta}\right] \xrightarrow{n \to \infty} 0$ for $\theta \neq 0$.
4. $Z_n^\theta \xrightarrow{n \to \infty} 0$ almost surely.
5. If $\mathbb E[Y_1] > 0$ and if $\psi(\theta) = 0$ has a nonzero solution $\theta^*$, then $\theta^* < 0$.
6. If such a $\theta^* < 0$ exists, and if $\mathbb E[Y_1] > 0$, then $p(k_0) \leq \exp\left(\theta^* k_0\right)$.
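For reference, the normalization behind 1 (which I did manage to show) comes from independence:
$$\mathbb E\left[Z_n^\theta\right] = \mathbb E\left[\exp(\theta X_n)\right] e^{-n\psi(\theta)} = \left(\mathbb E\left[\exp(\theta Y_1)\right]\right)^n e^{-n\psi(\theta)} = e^{n\psi(\theta)} e^{-n\psi(\theta)} = 1,$$
and conditioning on $\mathcal F_n := \sigma(Y_1, \ldots, Y_n)$ gives the martingale property the same way.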
I've been able to show 1, 4 (from 3), and 5 (from 2). I'm close on 2 but having some trouble: we need to show $\psi(\lambda\theta + (1-\lambda)\phi) < \lambda \psi(\theta) + (1-\lambda)\psi(\phi)$ whenever $\theta \neq \phi$ and $\lambda \in (0,1)$. Clearly we have $$\psi(\lambda\theta + (1-\lambda)\phi) = \log\mathbb E\left[ \exp\left(\lambda\theta Y_1 + (1-\lambda)\phi Y_1 \right)\right].$$ Meanwhile, by Jensen's inequality and the concavity of $x \mapsto x^\lambda$ for $0 < \lambda < 1$,
\begin{align*}
\lambda \psi(\theta) + (1-\lambda)\psi(\phi) &= \log \left( \mathbb E\left[\exp (\theta Y_1) \right]^\lambda\right) + \log\left(\mathbb E\left[ \exp(\phi Y_1)\right]^{1-\lambda}\right) \\
&\geq \log\left(\mathbb E\left[\exp\left(\lambda \theta Y_1\right)\right]\right) + \log\left(\mathbb E\left[\exp\left((1-\lambda)\phi Y_1\right)\right]\right) \\
&= \log\left(\mathbb E\left[\exp\left(\lambda \theta Y_1\right)\right]\mathbb E\left[\exp\left((1-\lambda)\phi Y_1\right)\right]\right).
\end{align*}
If I could show $\mathbb E\left[\exp\left(\lambda \theta Y_1\right)\right]\mathbb E\left[\exp\left((1-\lambda)\phi Y_1\right)\right] \geq \mathbb E\left[\exp\left(\lambda \theta Y_1\right)\exp\left((1-\lambda)\phi Y_1\right)\right]$, that would solve this problem, but this is very far from obvious to me (especially since the integrands aren't independent).
I'm really stuck on 3 and 6. Any help on any of these three would be greatly appreciated. Note that I would prefer not to use martingale convergence theorems, because these results have yet to appear in my textbook; I can only work with square-integrable martingales and stopping times.
Best Answer
These are my ideas:
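(Sketches only, with details left for you to check; everything below uses only elementary facts plus stopping at bounded times, so it should fit your toolkit.)

**For 2:** I would abandon the route via Jensen: when $\theta$ and $\phi$ have the same sign, $\exp(\lambda\theta Y_1)$ and $\exp((1-\lambda)\phi Y_1)$ are both monotone in $Y_1$ in the same direction and hence positively correlated, so the product inequality you want is in general reversed. Instead, Hölder's inequality with exponents $p = 1/\lambda$ and $q = 1/(1-\lambda)$ gives convexity directly:
$$\mathbb E\left[\exp\left((\lambda\theta + (1-\lambda)\phi) Y_1\right)\right] = \mathbb E\left[\left(e^{\theta Y_1}\right)^\lambda \left(e^{\phi Y_1}\right)^{1-\lambda}\right] \leq \mathbb E\left[e^{\theta Y_1}\right]^\lambda \mathbb E\left[e^{\phi Y_1}\right]^{1-\lambda},$$
and taking logarithms yields $\psi(\lambda\theta + (1-\lambda)\phi) \leq \lambda\psi(\theta) + (1-\lambda)\psi(\phi)$. Equality in Hölder would force $e^{\theta Y_1}$ and $e^{\phi Y_1}$ to be almost surely proportional, i.e. $e^{(\theta-\phi)Y_1}$ almost surely constant; for $\theta \neq \phi$ this would make $Y_1$ almost surely constant, which is excluded. Hence the inequality is strict.

**For 3:** compute the expectation exactly using independence:
$$\mathbb E\left[\sqrt{Z_n^\theta}\right] = \mathbb E\left[\exp\left(\tfrac{\theta}{2} X_n\right)\right] e^{-n\psi(\theta)/2} = \exp\left(n\left(\psi\left(\tfrac{\theta}{2}\right) - \tfrac{1}{2}\psi(\theta)\right)\right).$$
Since $\psi(0) = 0$, strict convexity gives, for $\theta \neq 0$,
$$\psi\left(\tfrac{\theta}{2}\right) = \psi\left(\tfrac{1}{2}\theta + \tfrac{1}{2}\cdot 0\right) < \tfrac{1}{2}\psi(\theta) + \tfrac{1}{2}\psi(0) = \tfrac{1}{2}\psi(\theta),$$
so the exponent is a strictly negative multiple of $n$, and $\mathbb E\left[\sqrt{Z_n^\theta}\right] \to 0$ geometrically.

**For 6:** take $\theta = \theta^*$. Since $\psi(\theta^*) = 0$, the martingale reduces to $Z_n := Z_n^{\theta^*} = \exp(\theta^* X_n)$ with $\mathbb E[Z_n] = 1$. Let $\tau := \inf\{n \in \mathbb N : X_n < -k_0\}$, a stopping time; for $k_0 \geq 0$ we have $p(k_0) = \mathbb P[\tau < \infty]$. For each fixed $n$, the stopped process $\left(Z_{m \wedge \tau}\right)_m$ is again a martingale, so, using $Z \geq 0$ to discard the event $\{\tau > n\}$,
$$1 = \mathbb E\left[Z_{n \wedge \tau}\right] \geq \mathbb E\left[Z_\tau \mathbf 1_{\{\tau \leq n\}}\right].$$
On $\{\tau \leq n\}$ we have $X_\tau < -k_0$, and multiplying by $\theta^* < 0$ reverses the inequality: $\theta^* X_\tau > -\theta^* k_0$, so $Z_\tau \geq e^{-\theta^* k_0}$. Therefore
$$1 \geq e^{-\theta^* k_0}\,\mathbb P[\tau \leq n], \qquad \text{i.e.} \qquad \mathbb P[\tau \leq n] \leq e^{\theta^* k_0},$$
and letting $n \to \infty$ gives $p(k_0) \leq \exp\left(\theta^* k_0\right)$.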