Almost Sure Occurrence in Probability Space

measure-theory, probability-theory, real-analysis

Consider an uncountably infinite sample space: the space of infinite coin-toss sequences.

Let $(\Omega,\mathcal{F},\mathbb{P})$ be the probability space. If a set $A\in\mathcal{F}$ satisfies $\mathbb{P}(A)=1,$ then we say that event $A$ occurs almost surely.

My Question:

  1. Why is it the case that every individual coin-toss sequence has probability zero in this uncountable probability space? In Shreve's explanation, he defines two sets:
    $$A_H=\{\omega\in\Omega_\infty:\omega_1=H\}\\A_T=\{\omega\in\Omega_\infty:\omega_1=T\}.$$

He sets $\mathbb{P}(A_H)=p,\mathbb{P}(A_T)=1-p=q.$
Hence, instead of assigning a probability to any single sequence, he assigns probabilities to sets of sequences that start with either $H$ or $T$. Why does this set-up make sense compared to how we deal with coin-tossing in a finite sample space? Usually, when we deal with two coins, we are given the assumption that the coin is fair, so $P(H)=\frac{1}{2}$. However, in the uncountably infinite coin-tossing experiment, any single infinite sequence gets measure $0$, while a set of sequences with a certain characteristic (e.g. first flip $H$) gets a strictly positive probability.

  1. What exactly does $\mathbb{P}(A)=1$, i.e. that $A$ occurs almost surely, mean? According to Shreve:

"Whenever an event is said to be almost sure, we mean it has probability one, even though it may not include every possible outcome. The outcome or set of outcomes not included, taken all together, has probability zero."

Does this mean the event that a coin-toss sequence contains at least one tail will happen with certainty? Conversely, when we say $\mathbb{P}(\{\omega\in\Omega_\infty:\omega_i=H\space\forall\space i\in\mathbb{N}\})=0,$ does this mean something with probability zero CANNOT happen, so it is impossible to obtain an infinite coin-toss sequence of all heads?

Reference:
Shreve, Steven E. *Stochastic Calculus for Finance II: Continuous-Time Models*. Springer, 2008.

Best Answer

For your first question, I think it's best to view the infinite coin toss as a limiting case of the finite coin toss; the uncountable case is the same idea, just bigger. So consider the space of coin-flip sequences of finite length $n$, and call this space $\Omega_n$. Can you convince yourself that each sequence in this space has probability $$ \mathbb{P}(\omega)=\frac{1}{\mbox{Total possible coin flip sequences}} = \frac{1}{2^n} \mbox{ for all } \omega \in \Omega_n?$$ This says that each sequence of coin flips $\omega$ has some density in $\Omega_n$. Now take the limit as $n$ approaches $\infty$ to see that any element $\omega \in \Omega_\infty$ has probability $$ \mathbb{P}(\omega)=\frac{1}{\mbox{Total possible coin flip sequences}} = \lim_{n\to\infty} \frac{1}{2^n} = 0 \mbox{ for all } \omega \in \Omega_\infty,$$ which says that each sequence of flips has no density in the infinite version. The space is that big: even if you remove finitely many $\omega$, the space is still infinitely large.
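This limiting argument is easy to check numerically; here is a minimal sketch, assuming a fair coin ($p=\frac{1}{2}$), using exact rational arithmetic so no rounding hides the decay:

```python
from fractions import Fraction
from itertools import product

# Finite space Omega_n: all sequences of n fair coin flips.
n = 4
omega_n = list(product("HT", repeat=n))

# Each individual sequence has probability 1 / 2^n.
p_single = Fraction(1, len(omega_n))
print(p_single)  # 1/16 when n = 4

# That probability vanishes as n grows, mirroring the limit above.
for m in (10, 20, 50):
    print(m, float(Fraction(1, 2 ** m)))
```

For $n=50$ the probability of any single sequence is already below $10^{-15}$; in the limit it is exactly $0$.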

In comparison, the reason that $\mathbb{P}(A_H)$ and $\mathbb{P}(A_T)$ are nonzero is that they split $\Omega_\infty$ in half: $$ \Omega_\infty = A_H \cup A_T \mbox{ (a disjoint union), so in particular } \mathbb{P}(A_H) + \mathbb{P}(A_T) = 1. $$
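In the finite analogue this split can be verified by counting: membership in $A_H$ depends only on the first flip, so it contains exactly half of the $2^n$ equally likely sequences (a sketch assuming a fair coin; the names `a_h`, `a_t` are mine):

```python
from itertools import product

n = 4
omega_n = list(product("HT", repeat=n))    # 2^n equally likely sequences
a_h = [w for w in omega_n if w[0] == "H"]  # finite analogue of A_H
a_t = [w for w in omega_n if w[0] == "T"]  # finite analogue of A_T

# The two sets partition the space, and each gets probability 1/2,
# no matter how large n is.
assert len(a_h) + len(a_t) == len(omega_n)
print(len(a_h) / len(omega_n))  # 0.5
```

Unlike the probability of a single sequence, this value does not shrink as $n$ grows, which is why it survives the passage to infinitely many tosses.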

For your second question, one way to think about it is that the all-heads coin-flip sequence is possible, but as the sequence gets longer, the probability that every flip so far has been heads becomes vanishingly small (think about the conditional probability of still seeing only heads after your first million flips). An event of probability zero is not impossible; it simply occurs with probability zero.
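To get a feel for "possible but probability zero," one can simulate: the chance that the first $k$ flips are all heads is $2^{-k}$, so for even moderate $k$ a simulation essentially never produces such a prefix, although nothing forbids it (a Monte Carlo sketch; the values of `k` and `trials` are arbitrary choices of mine):

```python
import random

random.seed(1)
k, trials = 30, 100_000

# Exact probability that the first k fair flips are all heads: 2^-k.
print(2.0 ** -k)  # about 9.3e-10

# Monte Carlo: count simulated sequences whose first k flips are all heads.
# Expected count is trials * 2^-k, i.e. essentially zero here, even though
# an all-heads prefix is a perfectly valid outcome.
all_heads = sum(
    all(random.random() < 0.5 for _ in range(k))
    for _ in range(trials)
)
print(all_heads)
```

The all-heads-forever sequence is the limit $k\to\infty$ of these events, and its probability is $\lim_{k\to\infty} 2^{-k} = 0$.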
