Probability Theory – Existence of Independent and Identically Distributed Random Variables


In probability theory we often use the existence of a sequence $(X_n)_n$ of independent and identically distributed random variables. This has already been discussed here. One of the answers says:

As Ahriman has pointed out, if you are given a random variable $X:\Omega\to E$ it may not be possible to construct the whole sequence on $\Omega$ as the latter may be quite a poor space, so you would have to go for a richer space.

My question is the following: how can I "enrich" my given probability space $\Omega$ so as to ensure the existence of i.i.d. random variables on it?

My idea was the following: assume that I am given a probability space $(\Omega_1,\mathcal{A}_1,\mathbb{P}_1)$ and a random variable $X:\Omega_1 \to E$. Now I can construct a probability space $(\Omega_2,\mathcal{A}_2,\mathbb{P}_2)$ on which there exists a sequence of i.i.d. random variables $X_n: \Omega_2 \to E$. Let $(\Omega,\mathcal{A},\mathbb{P}) := (\Omega_1,\mathcal{A}_1,\mathbb{P}_1) \otimes (\Omega_2,\mathcal{A}_2,\mathbb{P}_2)$ be the product space; then

$$X_n'(\omega_1,\omega_2) := X_n(\omega_2), \qquad X'(\omega_1,\omega_2) := X(\omega_1)$$

would still satisfy $X' \sim X$ and $X_n' \sim X_n$, and the random variables $X_n'$ would be independent. Is this correct?
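
A quick check that this works, using nothing but the definition of the product measure: for measurable sets $B, B_1, \dots, B_k \subseteq E$,

$$\mathbb{P}(X' \in B) = \mathbb{P}_1(X \in B), \qquad \mathbb{P}\left(\bigcap_{n=1}^{k} \{X_n' \in B_n\}\right) = \mathbb{P}_2\left(\bigcap_{n=1}^{k} \{X_n \in B_n\}\right) = \prod_{n=1}^{k} \mathbb{P}_2(X_n \in B_n),$$

so $X' \sim X$, $X_n' \sim X_n$, and the $X_n'$ are independent; applying the same computation to $\{X' \in B\} \cap \bigcap_{n} \{X_n' \in B_n\}$ shows that $X'$ is moreover independent of the whole sequence $(X_n')_n$.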

Best Answer

I cannot comment yet, so I'm posting this as an answer.$\def\ci{\perp\!\!\!\perp}$

This is probably not what you were asking, but I think it's interesting and relevant enough to post.

It is well known that one can construct arbitrary distributions from uniform random variables. Furthermore, given a single $\mathcal U[0,1]$ variable, it is possible to produce from it an i.i.d. sequence of such variables, which can then be used to obtain more general distributions. We can always extend a given space so that it carries such a variable by setting $$\hat{\Omega}=\Omega\times[0,1], \qquad \hat{\mathscr{A}}=\mathscr{A}\otimes\mathscr{B}([0,1]), \qquad \hat{P}=P\otimes\lambda,$$ where $\lambda$ is Lebesgue measure on $[0,1]$; in this case $\vartheta(\omega,t):= t$ is $\mathcal{U}[0,1]$-distributed and $\vartheta\ci \mathscr{A}$, i.e. $\vartheta$ is independent of the original $\sigma$-algebra.
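
To sketch the two standard steps behind this claim (the notation here is mine, not Kallenberg's): first, the binary digits $d_k := \lfloor 2^k \vartheta \rfloor \bmod 2$ of a $\mathcal{U}[0,1]$ variable $\vartheta$ are i.i.d. Bernoulli$(1/2)$, so splitting them along a partition of $\mathbb{N}$ into infinitely many infinite sets $\{k_{n,1} < k_{n,2} < \dots\}$, $n \in \mathbb{N}$, gives

$$\vartheta_n := \sum_{j=1}^{\infty} 2^{-j}\, d_{k_{n,j}}, \qquad n \in \mathbb{N},$$

an i.i.d. sequence of $\mathcal{U}[0,1]$ variables built from $\vartheta$ alone. Second, the quantile transform: for any distribution function $F$ with generalized inverse $F^{-1}(u) := \inf\{x \in \mathbb{R} : F(x) \ge u\}$, one has $F^{-1}(u) \le x \iff u \le F(x)$, hence

$$P\left(F^{-1}(\vartheta_n) \le x\right) = P(\vartheta_n \le F(x)) = F(x),$$

so $X_n := F^{-1}(\vartheta_n)$ is an i.i.d. sequence with distribution function $F$.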

For more details, see Kallenberg, Foundations of Modern Probability (2nd ed., 2002), in particular the discussion preceding Theorem 6.10 (the transfer theorem).
