Kolmogorov’s 0-1 law, tail events and bounded sequence of random variables.

measure-theory, probability-theory

Let $(X_n)_{n\in\mathbb{N}}$ be a sequence of i.i.d. $\mathbb{R}$-valued r.v.s such that for some $M \geq 0$ we have $P[|X_n| \leq M] = 1$ for all $n$. Define $Y_n = \frac{1}{n}\sum_{i=1}^n X_i$, $L = \limsup_{n}Y_n$, and $A = \{\lim_n Y_n \text{ exists}\}$. Then my reading material makes the following claim:

There exists some $c \in [-M, M]$ s.t. $P[L = c] = 1$, and $P[A] \in \{0, 1\}$.

The proof of the claim is as follows:

For any $q \in \mathbb{Q}$ define the tail event $A_q = \{L\geq q\}$. By Kolmogorov's 0-1 law, the probability of each of these tail events is either 0 or 1. Thus, there exists some $c = \sup\{q:P[A_q] = 1\} = \inf\{q:P[A_q] = 0\}$. Since $\mathbb{Q}$ is countable, $P[L\geq c] = P[L\leq c] = 1$, and so $P[L = c] = 1$. Finally $c \in [-M, M]$, since $P[L\in [-M, M]] = 1$.
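As a quick numerical illustration of the claim (a sketch using NumPy, with $X_i \sim \mathrm{Uniform}[-M, M]$ as one arbitrary bounded choice; by the strong law of large numbers the constant here is $c = E[X_1] = 0$):

```python
import numpy as np

# Sanity check, not part of the proof: for i.i.d. X_i with P[|X_i| <= M] = 1,
# the running averages Y_n = (X_1 + ... + X_n)/n have a limsup L that is
# a.s. a single constant c. With X_i ~ Uniform[-M, M] and M = 1, the SLLN
# pins that constant down as c = E[X_1] = 0.
M = 1.0
n = 200_000

for seed in range(5):  # several independent sample paths
    rng = np.random.default_rng(seed)
    x = rng.uniform(-M, M, size=n)
    y = np.cumsum(x) / np.arange(1, n + 1)  # Y_1, ..., Y_n
    # Every path's tail hovers near the same constant c = 0.
    print(f"path {seed}: Y_n = {y[-1]:+.4f}")
```

Each simulated path ends near $0$, consistent with $P[L = c] = 1$ for a single deterministic $c \in [-M, M]$.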

The main points I'm struggling to understand here are: 1) how do we know from Kolmogorov's 0-1 law that there exists some constant $c$ s.t. $c = \sup\{q:P[A_q] = 1\} = \inf\{q:P[A_q] = 0\}$, and 2) why does it follow from the countability of $\mathbb{Q}$ that $P[L\geq c] = P[L\leq c] = 1$?

Best Answer

I will provide a similar argument. Kolmogorov's 0-1 law tells us that the tail $\sigma$-algebra is trivial in the sense that $\forall A\in \mathscr{F}_\infty,\, P(A)\in \{0,1\}$. In general, if a $\sigma$-algebra $\mathscr{A}$ is trivial in this sense, then any $\mathscr{A}$-measurable random variable $Y$ is a.s. equal to a constant, that is, $P(Y=c)=1$ for some $c$. To see this, notice that $F_Y(y)=P(Y\leq y)\in \{0,1\}$ for every $y$, since $\{Y\leq y\}\in \mathscr{A}$. As $F_Y$ is nondecreasing, tends to $0$ at $-\infty$ and to $1$ at $+\infty$, and takes only the values $0$ and $1$, there is exactly one point $c$ at which $F_Y$ 'jumps' from $0$ to $1$. This implies that $P(Y=c)=1$.
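The jump step can be spelled out with the standard properties of a distribution function (a sketch filling in the last sentence):

```latex
\begin{align*}
  \text{Let } c &:= \inf\{y \in \mathbb{R} : F_Y(y) = 1\}. \\
  \text{By right-continuity of } F_Y:\quad
  P(Y \le c) &= F_Y(c) = \lim_{n\to\infty} F_Y\!\left(c + \tfrac{1}{n}\right) = 1, \\
  \text{and since } F_Y(y) = 0 \text{ for } y < c:\quad
  P(Y < c) &= \lim_{n\to\infty} F_Y\!\left(c - \tfrac{1}{n}\right) = 0, \\
  \text{hence}\quad
  P(Y = c) &= P(Y \le c) - P(Y < c) = 1.
\end{align*}
```

Applied to $Y = L$ (which is tail-measurable), this is exactly the claim $P[L = c] = 1$, with the boundedness assumption forcing $c \in [-M, M]$.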