Probability Theory – Existence of Independent and Identically Distributed Random Variables

independence, probability-distributions, probability-theory

I often see the sentence "let $X_1, X_2, \ldots$ be a sequence of i.i.d. random variables with a certain distribution". But given a random variable $X$ on a probability space $\Omega$, how do I know that there is a sequence of INDEPENDENT random variables of the same distribution on $\Omega$?

Best Answer

The easiest way to show the existence in our case is to construct such a probability space. The intuition is that it is easy to define the joint probability measure on some "simple" sets using the iid property and then one leverages the celebrated Caratheodory extension theorem.

More precisely, let $(E,\mathcal E)$ be the measurable space where our random variables are supposed to take values and let $\mu$ be a probability measure on $(E,\mathcal E)$ representing the distribution of such random variables. Define $\Omega = E^{\mathbb N_0}$ to be the space of countable trajectories over $E$ and let $\mathcal F$ be its product $\sigma$-algebra. Define the probability measure $\mathsf P$ on $(\Omega,\mathcal F)$ just based on the independence, i.e. for any $A_0,\dots,A_n\in \mathcal E$ we put $$ \mathsf P(X_0\in A_0,\dots,X_n\in A_n):=\mu(A_0)\times \dots\times \mu(A_n), $$ where $X_i:\Omega\to E$ is the $i$-th coordinate map. So far the measure $\mathsf P$ is only defined on the collection $\mathcal A$ of measurable rectangles, i.e. subsets of $\Omega$ of the form $A_0\times\dots\times A_n\times \Omega\times\Omega\times\dots$ where $A_i\in \mathcal E$. Finite unions of elements of $\mathcal A$ form an algebra, say $\mathcal B$. One checks that $\mathsf P$ is a finite pre-measure on $\mathcal B$, and hence by the Carathéodory extension theorem it extends to a unique measure $\mathsf P$ on $\mathcal F = \sigma(\mathcal B)$.
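For intuition, here is a finite toy version of the rectangle definition, restricted to the first $n$ coordinates. It assumes a two-point space $E=\{0,1\}$ with an illustrative measure $\mu(0)=0.4$, $\mu(1)=0.6$; all names are my own, not from the answer. The weight of a trajectory is the product of its coordinate weights, and the probability of a measurable rectangle $A_0\times A_1\times A_2$ then factors as $\mu(A_0)\mu(A_1)\mu(A_2)$:

```python
from itertools import product
from math import prod

E = (0, 1)
mu = {0: 0.4, 1: 0.6}   # illustrative distribution on E
n = 3                   # look at the first n coordinates only

def P(event):
    """Probability of a set of trajectories omega in E^n under the
    product measure: each trajectory weighs prod_i mu(omega_i)."""
    return sum(prod(mu[w] for w in omega) for omega in event)

# A measurable rectangle A_0 x A_1 x A_2 with A_i subsets of E:
A = [{0}, {0, 1}, {1}]
rect = [omega for omega in product(E, repeat=n)
        if all(omega[i] in A[i] for i in range(n))]

lhs = P(rect)                                    # measure of the rectangle
rhs = prod(sum(mu[a] for a in Ai) for Ai in A)   # mu(A_0)*mu(A_1)*mu(A_2)
```

Here `lhs` and `rhs` agree (both equal $0.4\cdot 1.0\cdot 0.6 = 0.24$), which is exactly the defining property of $\mathsf P$ on rectangles; Carathéodory then extends this consistently defined set function from rectangles to the full product $\sigma$-algebra.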

As Ahriman has pointed out, if you are given a random variable $X:\Omega\to E$, it may not be possible to construct the whole sequence on $\Omega$ itself, as the latter may be quite a poor space, so you may have to pass to a richer space. For example, $E$ can always serve as a sample space for the distribution over it, by taking $\mathrm{id}_E$ as the random variable. But if $E = \{a,b\}$ with $\mu(a) = 0.4$ and $\mu(b) = 0.6$, then there is one and only one random variable defined on $E$ which has $\mu$ as its distribution, namely $\mathrm{id}_E$, so not even two independent copies can be defined on that space.
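The two-point example above can be checked exhaustively. The sketch below (my own illustration, using the same $\mu(a)=0.4$, $\mu(b)=0.6$) enumerates all four maps $X:E\to E$ and computes the distribution of each under $\mu$, i.e. $\mathsf P(X=e)=\mu(\{\omega : X(\omega)=e\})$; only the identity map matches $\mu$:

```python
from itertools import product

E = ('a', 'b')
mu = {'a': 0.4, 'b': 0.6}

# Enumerate all 2^2 = 4 maps X : E -> E and compute the distribution
# of each one under mu.
matching = []
for values in product(E, repeat=len(E)):
    X = dict(zip(E, values))
    dist = {e: sum(mu[w] for w in E if X[w] == e) for e in E}
    if all(abs(dist[e] - mu[e]) < 1e-12 for e in E):
        matching.append(X)

# matching contains only the identity map {'a': 'a', 'b': 'b'}: the
# constant maps give point masses, and the swap map gives the weights
# 0.6 and 0.4 in the wrong order.
```

Since a single random variable with law $\mu$ already exhausts this sample space, an i.i.d. sequence requires a richer space such as the product space $E^{\mathbb N_0}$ constructed above.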
