[Math] I.i.d. random variables and independent $\sigma$-algebras

measure-theory, probability-theory

Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space and let $X,Y: \Omega \to \mathbb{R}$ be i.i.d. random variables such that $\mathbb{P}(X=1)=\mathbb{P}(X=-1)=\frac{1}{2}$. Define $XY: \Omega \to \mathbb{R}$ by $XY(\omega) = X(\omega)Y(\omega)$ for all $\omega \in \Omega$. Now, I want to prove that the two random variables $X$ and $XY$ are independent.

I know that $X$ and $XY$ are independent if $\sigma(X)$ and $\sigma(XY)$ are independent, and that $\sigma(X)$ and $\sigma(XY)$ are independent if
\begin{align}
(*) \qquad \forall S \in \sigma(X), \forall T \in \sigma(XY): \mathbb{P}(S \cap T) = \mathbb{P}(S)\mathbb{P}(T).
\end{align}
Since $X$ and $Y$ are i.i.d. random variables, I know that the events $X^{-1}((-\infty,a])$ and $Y^{-1}((-\infty,b])$ are independent for all $a,b\in \mathbb{R}$. How can I conclude that equation $(*)$ holds?

Similarly, one can show that $Y$ and $XY$ are independent. However, the three random variables $X$, $Y$ and $XY$ are not independent. Why?

Best Answer

I will first answer your last question: given $X$ and $Y$ (as a vector), the random variable $XY$ is completely determined. So, you can easily find three sets $A,$ $B$ and $C$ such that $\mathbf{P}(XY \in A \mid X \in B, Y \in C) \neq \mathbf{P}(XY \in A).$
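To make this concrete, here is a small brute-force sketch (the choice $A = B = C = \{1\}$ is one possible example, not the only one) showing that conditioning on $X$ and $Y$ changes the distribution of $XY$:

```python
from fractions import Fraction
from itertools import product

# Sample space: the four equally likely outcomes (x, y) with x, y in {-1, 1}.
outcomes = list(product([-1, 1], repeat=2))
p = Fraction(1, 4)

def prob(event):
    """Probability of the set of outcomes satisfying the predicate `event`."""
    return sum(p for x, y in outcomes if event(x, y))

# With A = B = C = {1}: P(XY = 1) = 1/2, yet conditioning on
# X = 1 and Y = 1 forces XY = 1.
p_xy = prob(lambda x, y: x * y == 1)
p_cond = (prob(lambda x, y: x * y == 1 and x == 1 and y == 1)
          / prob(lambda x, y: x == 1 and y == 1))

print(p_xy)    # 1/2
print(p_cond)  # 1
```

So the triple $(X, Y, XY)$ fails the defining product condition for these three events, even though each pair is independent.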

For your first question, I will give you some results; I am not sure which one you are missing.

Suppose $X$ and $Y$ are two discrete random variables and let $\Sigma(X)$ and $\Sigma(Y)$ be the corresponding sigma algebras they generate.

  • If $\Sigma(X)$ and $\Sigma(Y)$ are independent, every $A \in \Sigma(X)$ is independent of every $B \in \Sigma(Y);$ in particular when $A = \{X = x_n\}$ and $B = \{Y = y_m\}.$
  • If every $\{X = x_n\}$ is independent of every $\{Y = y_m\},$ then for every pair of intervals $I,J$ the events $\{X \in I\}$ and $\{Y \in J\}$ are independent, since $\mathbf{P}(X \in I, Y \in J) = \sum\limits_{x_n \in I,\, y_m \in J} \mathbf{P}(X = x_n, Y = y_m) = \sum\limits_{x_n \in I,\, y_m \in J} \mathbf{P}(X = x_n) \mathbf{P}(Y = y_m) = \sum\limits_{x_n \in I} \mathbf{P}(X = x_n) \sum\limits_{y_m \in J} \mathbf{P}(Y = y_m) = \mathbf{P}(X \in I) \mathbf{P}(Y \in J).$ Then, $\Sigma(X)$ and $\Sigma(Y)$ are independent (let me clarify this last claim below).
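In the concrete $\pm 1$ setting of the question, the atom-level check for $X$ and $XY$ can be carried out by direct enumeration; a minimal sketch:

```python
from fractions import Fraction
from itertools import product

# The four equally likely outcomes (x, y) with x, y in {-1, 1}.
outcomes = list(product([-1, 1], repeat=2))
p = Fraction(1, 4)

def prob(event):
    """Probability of the set of outcomes satisfying the predicate `event`."""
    return sum(p for x, y in outcomes if event(x, y))

# Check P(X = a, XY = b) = P(X = a) P(XY = b) for every pair of atoms.
for a, b in product([-1, 1], repeat=2):
    joint = prob(lambda x, y: x == a and x * y == b)
    factored = prob(lambda x, y: x == a) * prob(lambda x, y: x * y == b)
    assert joint == factored, (a, b)

print("atoms of X and XY are pairwise independent")
```

Each of the four joint probabilities equals $\frac{1}{4} = \frac{1}{2}\cdot\frac{1}{2}$, which is exactly the atom-level hypothesis of the bullet above.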

For a function $f$ to be measurable from the measurable space $(X, \mathbf{X})$ to the measurable space $(Y, \mathbf{Y})$, it is necessary and sufficient that the set $f^{-1}(\mathbf{Y}) := \{f^{-1}(B) \mid B \in \mathbf{Y}\}$ be contained in $\mathbf{X}$; this just restates the definition of measurability. Note that $f^{-1}(\mathbf{Y})$ is a sigma algebra, because preimages respect set operations.

If $\mathbf{Y}$ is the sigma algebra generated by some set $\mathsf{Y} \subset 2^Y$, then $f^{-1}(\mathbf{Y})$ is the sigma algebra generated by $f^{-1}(\mathsf{Y})$. (Since the Borel sigma algebra is generated by intervals of the form $(-\infty, a]$, you can apply this with $\mathbf{Y} = \text{Borel sigma algebra}$ and $\mathsf{Y} = \text{intervals of the form } (-\infty, a]$.) Indeed, since $\mathsf{Y} \subset \mathbf{Y}$, we have $f^{-1}(\mathsf{Y}) \subset f^{-1}(\mathbf{Y})$, and hence, as the right-hand side is a sigma algebra, the sigma algebra generated by the left-hand side is contained in $f^{-1}(\mathbf{Y})$. Let $\mathcal{Y}$ be the sigma algebra generated by $f^{-1}(\mathsf{Y})$; I just established that $\mathcal{Y} \subset f^{-1}(\mathbf{Y})$. To prove the reverse inclusion, consider the set $\bar{\mathsf{Y}}$ of subsets $B$ of $Y$ such that $f^{-1}(B) \in \mathcal{Y}$; clearly $\mathsf{Y} \subset \bar{\mathsf{Y}}$. Since preimages behave well with set operations, $\bar{\mathsf{Y}}$ is a sigma algebra that contains $\mathsf{Y}$, thus it contains the sigma algebra generated by the latter set, that is, $\mathbf{Y} \subset \bar{\mathsf{Y}}$. Therefore, for every $B \in \mathbf{Y}$ we have $f^{-1}(B) \in \mathcal{Y}$, and thus $\mathcal{Y} = f^{-1}(\mathbf{Y})$.
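Since $\sigma(X)$ and $\sigma(XY)$ are finite here (each is generated by a two-valued random variable on a four-point $\Omega$), the full condition $(*)$ can even be verified exhaustively. A brute-force sketch, building each generated sigma algebra as the set of all unions of level sets:

```python
from fractions import Fraction
from itertools import combinations, product

# Omega = the four equally likely points (x, y), x, y in {-1, 1}.
omega = list(product([-1, 1], repeat=2))
p = Fraction(1, 4)

def prob(event_set):
    """Probability of a subset of Omega under the uniform measure."""
    return p * len(event_set)

def sigma(rv):
    """Sigma algebra generated by a discrete random variable on a finite
    Omega: all unions of its level sets (atoms), including the empty union."""
    levels = {}
    for w in omega:
        levels.setdefault(rv(*w), set()).add(w)
    atoms = list(levels.values())
    events = [frozenset()]
    for r in range(1, len(atoms) + 1):
        for combo in combinations(atoms, r):
            events.append(frozenset().union(*combo))
    return events

sigma_x = sigma(lambda x, y: x)       # sigma(X): 4 events
sigma_xy = sigma(lambda x, y: x * y)  # sigma(XY): 4 events

# Verify (*): P(S ∩ T) = P(S) P(T) for all S in sigma(X), T in sigma(XY).
assert all(prob(S & T) == prob(S) * prob(T)
           for S in sigma_x for T in sigma_xy)
print("sigma(X) and sigma(XY) are independent")
```

Of course this enumeration works only because $\Omega$ is effectively finite here; the general argument is the one sketched above with generating sets and preimages.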