Show that $X_n$ converges almost surely to $0$

convergence-divergence, random-variables

Let $(X_n)_{n \geq 1}$ be a sequence of independent random variables such that
$\mathbb{P}(X_n = 0) = 1 - \frac{1}{n^2}$ and $\mathbb{P}(X_n = n^3) = \frac{1}{n^2}$.

I would like to show that $X_n \xrightarrow{a.s.} 0$.

By definition, to show that $X_n$ converges almost surely to $0$, the following has to hold:

$\mathbb{P}[\{\omega \in \Omega: 0 = \lim_{n\to \infty} X_n (\omega)\}] =1$.

I have already shown that $X_n \xrightarrow{\mathbb{P}} 0$. Intuitively I would have said that $X_n \xrightarrow{a.s.} 0$ follows from this, but the implication only holds the other way around, right?

My thought was that since $\lim_{n \to \infty} \mathbb{P}[|X_n| > \epsilon] = 0$, it should also hold that $\lim_{n \to \infty} \mathbb{P}[|X_n(\omega)| > \epsilon] = 0$. Can I conclude the claim from that?

Best Answer

By the [first] Borel-Cantelli lemma, since $$\sum_{n=1}^\infty\mathbb{P}[X_n \neq 0] = \sum_{n=1}^\infty \frac{1}{n^2} < \infty,$$ we have $\mathbb{P}[\limsup_n[X_n \neq 0]] = 0.$ There are a few ways to see this implies $X_n \to 0$ almost surely, but perhaps the most natural is to first use $$\mathbb{P}[\liminf_n A_n] = 1 - \mathbb{P}[\limsup_n (\Omega\setminus A_n)]$$ with $A_n = \{\omega \in \Omega : X_n(\omega) = 0\}$ to conclude $$\mathbb{P}[\liminf_n[X_n=0]] = 1.$$ That is, for almost all $\omega$, there is an $N_\omega$ such that $n > N_\omega$ implies $X_n(\omega) = 0$. Since any sequence that is eventually constant converges to that constant, this shows that $X_n(\omega) \to 0$ for almost all $\omega$, so $X_n \to 0$ almost surely.
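To get some numerical intuition for the Borel-Cantelli argument, here is a small Monte Carlo sketch (plain Python, standard library only; the function name `last_nonzero_index` is just for this illustration). It simulates many independent paths of $(X_n)$ truncated at a large $N$ and records the last index at which the path is nonzero; since $\sum 1/n^2$ converges quickly, that index is almost always small, i.e. almost every path is eventually constant $0$.

```python
import random

def last_nonzero_index(N, rng):
    """Simulate one path X_1, ..., X_N and return the last n with X_n != 0 (0 if none)."""
    last = 0
    for n in range(1, N + 1):
        # X_n = n^3 with probability 1/n^2, X_n = 0 otherwise
        if rng.random() < 1.0 / n**2:
            last = n
    return last

rng = random.Random(0)
paths = [last_nonzero_index(10_000, rng) for _ in range(1_000)]
# Fraction of simulated paths that are identically 0 beyond index 100:
frac_small = sum(1 for last in paths if last <= 100) / len(paths)
print(frac_small)  # close to 1
```

This matches the tail bound $\mathbb{P}[\exists\, n > 100 : X_n \neq 0] \leq \sum_{n > 100} \frac{1}{n^2} \approx 0.01$, so roughly 99% of paths should already be constant $0$ after index $100$.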


There were some issues in the comments with distinguishing convergence almost surely from convergence in probability. As a general rule, you can never use convergence in probability by itself to show convergence almost surely.

  • Rather, the closest you typically can get by itself is to show $$X_n \xrightarrow{\mathbb{P}}X \\\iff \\ \forall \text{ subsequence } (X_{k_n})_{n\geq 1} \subseteq (X_n)_{n \geq 1}, \exists \text{ subsequence } (X_{m_n})_{n\geq 1} \subseteq (X_{k_n})_{n\geq 1} \text{ such that } X_{m_n} \xrightarrow{\text{a.s.}}X$$ For example, if you know $X_n \xrightarrow{\mathbb{P}} X$ and that $X_n \xrightarrow{\text{a.s.}} Y$, then you can conclude $X_n \xrightarrow{\text{a.s.}} X$, but without something extra like that, we cannot get from convergence in probability to convergence almost surely.

  • Another way to see the connection is to note that for an arbitrary $\varepsilon > 0$, we have [e.g. by Fatou's lemma] $$\mathbb{P}[\liminf_n[|X_n-X| < \varepsilon]] \leq \liminf_n \mathbb{P}[|X_n-X| < \varepsilon]$$ so by taking limits as $\varepsilon \to 0$, $$\lim_{\varepsilon \to 0^+}\mathbb{P}[\liminf_n[|X_n-X| < \varepsilon]] \leq \lim_{\varepsilon \to 0^+} \liminf_n \mathbb{P}[|X_n-X| < \varepsilon]$$ The quantity on the left equals $1$ if and only if $X_n \to X$ almost surely. The quantity on the right equals $1$ if and only if $X_n \to X$ in probability. Since both quantities are $\leq 1$, we may directly show that almost sure convergence implies convergence in probability, but assuming the convergence is in probability tells us nothing about the probability that $X_n$ converges.

  • For a concrete example to work on, we may change your problem slightly: Suppose $(Y_n)_{n\geq 1}$ is a sequence of independent random variables such that $\mathbb{P}[Y_n = 0] = 1 - \frac{1}{n}$ and $\mathbb{P}[Y_n = n^2] = \frac{1}{n}$. Note that $Y_n \xrightarrow{\mathbb{P}} 0$. Then use the [second] Borel-Cantelli lemma to show $\mathbb{P}[Y_n \text{ converges}] = 0$.
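For contrast, the modified sequence $(Y_n)$ can be simulated the same way (again a standard-library Python sketch; `nonzero_count` is just an illustrative name). Here $\sum 1/n$ diverges, so the second Borel-Cantelli lemma says the events $[Y_n \neq 0]$ occur infinitely often almost surely; numerically, the number of nonzero terms up to $N$ grows like the harmonic number $H_N \approx \ln N$ rather than staying bounded.

```python
import math
import random

def nonzero_count(N, rng):
    """Count the indices n <= N with Y_n != 0, where P[Y_n = n^2] = 1/n."""
    return sum(1 for n in range(1, N + 1) if rng.random() < 1.0 / n)

rng = random.Random(0)
N = 10_000
counts = [nonzero_count(N, rng) for _ in range(500)]
avg = sum(counts) / len(counts)
# E[nonzero_count] = H_N = sum_{n <= N} 1/n, which diverges as N -> infinity,
# so no path is eventually 0 and Y_n fails to converge almost surely.
print(avg, math.log(N))  # avg is near H_N ~ ln N + 0.577
```

Raising $N$ keeps pushing the average count up by about $\ln 10 \approx 2.3$ per factor of $10$, which is the numerical shadow of "infinitely many nonzero terms with probability $1$".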