Why is there independence in this proof of the existence of a countable sequence of independent random variables?

independence, probability theory, random variables

Consider this proposition:

Proposition 2.4.1. Let $(\Omega, \mathcal{F}, \mathbb{P})$ be the probability space of Lebesgue measure on the Borel subsets of $(0,1)$. Let $(F_n: n \in \mathbb{N})$ be a sequence of distribution functions. Then there exists a sequence $(X_n: n \in \mathbb{N})$ of independent random variables on $(\Omega, \mathcal{F}, \mathbb{P})$ such that $X_n$ has distribution function $F_{X_n}=F_n$ for all $n$.

Proof: Choose a bijection $m: \mathbb{N}^2 \rightarrow \mathbb{N}$ and set $Y_{k, n}=R_{m(k, n)}$, where $R_m$ is the $m$th Rademacher function. Set
$$Y_n=\sum_{k=1}^{\infty} 2^{-k} Y_{k, n} . $$
Then $Y_1, Y_2, \ldots$ are independent and, for all $n$ and all dyadic rationals $i2^{-k}=0.y_1\ldots y_k$ (binary), we have
$$ \mathbb{P}(i 2^{-k}<Y_n \leq(i+1) 2^{-k})=\mathbb{P}(Y_{1, n}=y_1, \ldots, Y_{k, n}=y_k)=2^{-k} $$
so $\mathbb{P}(Y_n \leq x)=x$ for all $x \in[0,1]$. Set
$$ G_n(y)=\inf \{x: y \leq F_n(x)\} $$
then, by Lemma 2.2.1, $G_n$ is Borel and $G_n(y) \leq x$ if and only if $y \leq F_n(x)$. So, if we set $X_n=G_n(Y_n)$, then $X_1, X_2, \ldots$ are independent random variables on $\Omega$ and
$$ \mathbb{P}(X_n \leq x)=\mathbb{P}(G_n(Y_n) \leq x)=\mathbb{P}(Y_n \leq F_n(x))=F_n(x) . $$
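The construction above can be checked numerically. Below is a minimal sketch (not part of the proof): the binary digits of a single sample $\omega$ play the role of the Rademacher functions, `pair` is one concrete choice of the bijection $m:\mathbb{N}^2\to\mathbb{N}$ (Cantor pairing, my choice here), and `G_exp` is the generalized inverse $G_n$ for the Exp(1) distribution function $F(x)=1-e^{-x}$.

```python
import math
import random

B = 1024  # keep only the first B binary digits of omega

def sample_omega(rng):
    # draw omega in (0,1) under Lebesgue measure, stored as its first B bits
    return rng.getrandbits(B)

def R(omega_bits, m):
    # m-th binary digit of omega (m >= 1): stands in for the Rademacher function R_m
    return (omega_bits >> (B - m)) & 1

def pair(k, n):
    # Cantor pairing: one concrete choice of the bijection m : N^2 -> N
    return (k + n) * (k + n + 1) // 2 + n + 1

def Y(omega_bits, n, K=30):
    # truncation of Y_n = sum_{k>=1} 2^{-k} Y_{k,n}, where Y_{k,n} = R_{pair(k,n)}
    return sum(2.0 ** -k * R(omega_bits, pair(k, n)) for k in range(1, K + 1))

def G_exp(y):
    # generalized inverse of F(x) = 1 - e^{-x}: G(y) = -log(1 - y)
    return -math.log(1.0 - y)

rng = random.Random(0)
omegas = [sample_omega(rng) for _ in range(20000)]
u1 = [Y(w, 1) for w in omegas]  # approximately Uniform(0,1)
u2 = [Y(w, 2) for w in omegas]  # approximately Uniform(0,1), independent of u1
x1 = [G_exp(u) for u in u1]     # X_1 = G_1(Y_1): Exp(1) samples, mean near 1
```

Empirically, `u1` and `u2` each look uniform on $(0,1)$ and are uncorrelated, even though both are functions of the same sample $\omega$: they read off disjoint sets of binary digits.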

Question: I am struggling to see why the $Y_n$ are independent. I can see why this would be true if the sums were finite: independence would then follow from the independence of the Rademacher functions, by considering finite intersections of events. However, I do not see how one argues for the independence of the infinite sums.

Best Answer

For each $n$, the partial sums $$ S^{n}_N:=\sum_{k=1}^{N}2^{-k} Y_{k, n} $$ converge almost surely (indeed everywhere, since the terms are nonnegative and bounded by $2^{-k}$), and the limit is $Y_n$; this is exactly how $Y_n$ is defined. Now take two bounded continuous functions $f$ and $g$. By the dominated convergence theorem, $$ \mathbb{E}\left[f(Y_{1})g(Y_2)\right]=\lim\limits_{N\to \infty}\mathbb{E}\left[f(S^{1}_N)g(S^2_N)\right]. $$ On the other hand, you seem to agree that $S^{1}_N$ and $S^{2}_N$ are independent, as they are functions of disjoint finite sets of independent Rademacher variables. Hence, $$ \mathbb{E}\left[f(Y_{1})g(Y_2)\right]=\lim\limits_{N\to \infty}\mathbb{E}\left[f(S^{1}_N)g(S^2_N)\right]=\lim\limits_{N\to \infty}\mathbb{E}\left[f(S^{1}_N)\right]\mathbb{E}\left[g(S^2_N)\right]=\mathbb{E}\left[f(Y_1)\right]\mathbb{E}\left[g(Y_2)\right], $$ where the last equality follows again from dominated convergence. Since this factorization for all bounded continuous $f$ and $g$ characterizes independence of $Y_1$ and $Y_2$, they are independent; the same argument with finitely many test functions extends to the whole family $(Y_n)_n$.
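As a quick numerical sanity check of this limiting argument (not a proof), one can estimate $\mathbb{E}[f(S^1_N)g(S^2_N)]$ by Monte Carlo and watch it factorize. The test functions `f`, `g` and the sample sizes below are arbitrary choices of mine:

```python
import math
import random

rng = random.Random(1)

def S(N):
    # one draw of the partial sum S_N = sum_{k=1}^{N} 2^{-k} B_k,
    # with B_k independent fair bits (playing the role of the Y_{k,n})
    return sum(2.0 ** -k * rng.getrandbits(1) for k in range(1, N + 1))

f = math.cos            # a bounded continuous test function
g = lambda x: x * x     # another bounded continuous test function

M, N = 100000, 25
# each call to S consumes fresh bits, so the two coordinates are independent,
# just as S^1_N and S^2_N are built from disjoint Rademacher functions
pairs = [(S(N), S(N)) for _ in range(M)]

E_fg = sum(f(a) * g(b) for a, b in pairs) / M
E_f = sum(f(a) for a, _ in pairs) / M
E_g = sum(g(b) for _, b in pairs) / M
# E_fg is close to E_f * E_g, and for large N both are close to the limiting
# value E[cos(U)] * E[U^2] = sin(1) * (1/3) for U ~ Uniform(0,1)
```

For large $N$ the estimates agree with the uniform-limit values $\mathbb{E}[\cos U]=\sin 1$ and $\mathbb{E}[U^2]=1/3$, illustrating both the factorization and the convergence $S_N \to Y$.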
