[Math] Independence of Coin Tosses

probability, probability theory

Let $\Omega = \{HH, HT, TH, TT\}$ represent the possible sequences of outcomes from tossing a coin twice. It seems common to say, "Let $P(H) = p$, where $P(H)$ is the probability of getting a heads on either toss. Then, since these are independent events, $P(HH) = p^2$."

However, the event $\{H\}$ is not measurable (it is not even a subset of $\Omega$), so setting $P(H)$ is nonsense. We also can't immediately say, "Getting a heads on the first toss and a heads on the second toss are independent," because we would need something like $P(H_1 \cap H_2) = P(H_1)P(H_2)$, where $H_1, H_2$ are a heads on the first and second tosses, respectively. But, again, we don't yet have such events.

It seems the rigorous way to go about this would be to first assign probabilities to events in such a way that they end up being independent, say,
$$
P(HH) = p^2, \quad P(HT) = p(1-p), \quad P(TH) = (1-p)p,\quad P(TT) = (1-p)^2.
$$
Then, letting
$$A = \{H \text{ on first toss}\} = \{HH, HT\}, \\
B = \{H \text{ on second toss}\} = \{HH, TH\},
$$
the event we are interested in is $\{HH\} = A \cap B$. So,
$$
P(A \cap B) = P(\{HH\}) = p^2, \qquad P(A)P(B) = (p^2 + p(1-p))(p^2 + (1-p)p) = p^2,
$$
and indeed, getting a heads on the first toss is independent of getting a heads on the second toss.
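The computation above can be sanity-checked numerically. This is just a sketch; the bias `p = 0.3` is a hypothetical value, and any value in $[0, 1]$ gives the same conclusion:

```python
# Assign probabilities to the four outcomes as in the construction above,
# then verify that the events A and B come out independent.
p = 0.3  # hypothetical bias; any value in [0, 1] works

P = {"HH": p * p, "HT": p * (1 - p), "TH": (1 - p) * p, "TT": (1 - p) ** 2}

A = {"HH", "HT"}  # heads on first toss
B = {"HH", "TH"}  # heads on second toss

def prob(event):
    """Probability of an event (a subset of the sample space)."""
    return sum(P[w] for w in event)

# P(A ∩ B) should equal P(A) * P(B), and both marginals should equal p.
print(prob(A & B), prob(A) * prob(B))
```

Here `prob(A)` collapses to $p^2 + p(1-p) = p$, matching the algebra in the display above.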

I am just confused since we seem to be working backwards from "intuitive" probability. Intuitively, I know what is meant by, "$P(H) = p$, and independence implies $P(HH) = p^2$," but once I lay it out rigorously this doesn't work. Instead, it seems like we have to assign probabilities very carefully to the sequences of tosses so that the tosses end up being independent, rather than assuming they are independent from the beginning and start computing probabilities right away. Have I overlooked something, or is my construction above really the right way to do it?

Best Answer

You do have events $H_1$ and $H_2$, namely, they are the events you labeled $A$ and $B$: \begin{align} H_1 & = \{ HH, HT \}, \\ H_2 & = \{ HH, TH \}. \\ \end{align}

In general, if $HH$, $HT$, $TH$, and $TT$ are simply the labels of events in some arbitrary probability space $\Omega$, these events might or might not be independent. The sum of their probabilities merely needs to be $1$.
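To see that independence really is an extra modeling assumption and not automatic, here is a hypothetical assignment (the specific numbers are made up for illustration) that sums to $1$ but fails independence:

```python
# A probability assignment on {HH, HT, TH, TT} that sums to 1
# but does NOT make the two tosses independent.
P = {"HH": 0.5, "HT": 0.0, "TH": 0.0, "TT": 0.5}

H1 = {"HH", "HT"}  # heads on first toss
H2 = {"HH", "TH"}  # heads on second toss

def prob(event):
    return sum(P[w] for w in event)

# P(H1 ∩ H2) = 0.5, but P(H1) * P(H2) = 0.25, so H1 and H2 are dependent.
print(prob(H1 & H2), prob(H1) * prob(H2))
```

This is the "perfectly correlated coin": the second toss always matches the first, so knowing $H_1$ completely determines $H_2$.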

We do usually suppose, however, that a good model of two successive tosses of the same coin would be a probability space for which $P(H_1) = P(H_2)$ and for which $H_1$ and $H_2$ are independent. The fact that $P(HH) = P(H_1 \cap H_2) = P(H_1)P(H_2) = (P(H_1))^2$ follows immediately. Writing $p = P(H_1)$ for convenience, we then get $P(HH) = p^2$, $P(HT) = p(1-p)$, and so forth by simple algebra.
