Define independence of random variables from different probability spaces to the same output space

measure-theory · probability-theory · random-variables

Let $X, Y$ be random variables from a probability space $(\Omega, \mathcal F, \mathbb P)$ to a Banach space $\mathcal X$. We endow $\mathcal X$ with its Borel $\sigma$-algebra. Then we can define the independence of $X$ and $Y$ as follows:

$X$ and $Y$ are independent if $\mathbb P [X^{-1} (A) \cap Y^{-1} (B)] = \mathbb P [X^{-1} (A)] \cdot \mathbb P[ Y^{-1} (B)]$ for all Borel sets $A, B$ in $\mathcal X$.
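This product formula can be checked exhaustively on a finite toy model. The following sketch (an illustrative choice, not part of the question) takes $\Omega = \{0,1\}^2$ with the uniform measure, lets $X$ and $Y$ be the coordinate projections, and verifies the definition for every pair of subsets of the output space $\{0,1\}$:

```python
from itertools import product

# Toy probability space: Omega = {0,1}^2 with the uniform measure
# (hypothetical example; X is the first coordinate, Y the second).
Omega = list(product([0, 1], repeat=2))
P = {omega: 0.25 for omega in Omega}  # uniform probability measure

X = lambda omega: omega[0]
Y = lambda omega: omega[1]

def prob(event):
    """P of the subset of Omega described by the predicate `event`."""
    return sum(P[w] for w in Omega if event(w))

# Check P[X^{-1}(A) ∩ Y^{-1}(B)] = P[X^{-1}(A)] * P[Y^{-1}(B)]
# for all subsets A, B of the finite output space {0, 1}.
subsets = [set(), {0}, {1}, {0, 1}]
for A in subsets:
    for B in subsets:
        joint = prob(lambda w: X(w) in A and Y(w) in B)
        split = prob(lambda w: X(w) in A) * prob(lambda w: Y(w) in B)
        assert abs(joint - split) < 1e-12
print("X and Y are independent")
```

In a finite output space every subset is Borel, so the loop over the four subsets really does exhaust the definition.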

Assume $Z$ is a random variable from another probability space $(\Omega', \mathcal F', \mathbb P')$ to $\mathcal X$. How can we extend the definition of independence between $X$ and $Z$ to this setting?

In the simplest case, $(\Omega', \mathcal F') = (\Omega, \mathcal F)$ and $\mathbb P \neq \mathbb P'$, so that $X$ and $Z$ are, in general, governed by different probability distributions.

Best Answer

Define a new probability space $(\Omega'',\mathcal{F}'',\mathsf{P}'')$, where $\Omega''=\Omega \times \Omega'$, $\mathcal{F}''=\mathcal{F}\otimes\mathcal{F}'$, and $\mathsf{P}''$ is the product measure, i.e., the unique measure satisfying $$ \mathsf{P}''(A\times B)=\mathsf{P}(A)\mathsf{P}'(B) $$ for all $A\in\mathcal{F}$ and $B\in \mathcal{F}'$. Then lift $X$ and $Z$ to the product space via the coordinate projections, $$ \tilde X(\omega,\omega') = X(\omega), \qquad \tilde Z(\omega,\omega') = Z(\omega'), $$ so that $\tilde X$ and $\tilde Z$ have the same distributions as $X$ and $Z$. By construction, $\tilde X$ and $\tilde Z$ are independent: for all Borel sets $A, B$ in $\mathcal X$, $$ \mathsf{P}''[\tilde X^{-1}(A)\cap \tilde Z^{-1}(B)] = \mathsf{P}''[X^{-1}(A)\times Z^{-1}(B)] = \mathsf{P}[X^{-1}(A)]\,\mathsf{P}'[Z^{-1}(B)]. $$
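The construction above can be sketched on finite spaces, where the product measure is just a table of products. All sample spaces, measures, and the real-valued maps `X` and `Z` below are hypothetical choices for illustration; the point is that the lifted variables on $\Omega \times \Omega'$ satisfy the product formula exactly:

```python
# Two separate finite probability spaces (illustrative, made-up data).
Omega  = ["a", "b"]
P      = {"a": 0.3, "b": 0.7}
OmegaP = ["c", "d", "e"]
Pp     = {"c": 0.2, "d": 0.5, "e": 0.3}

# X lives on Omega, Z on Omega' (hypothetical real-valued variables).
X = {"a": 0.0, "b": 1.0}
Z = {"c": -1.0, "d": 0.0, "e": 1.0}

# Product space Omega'' = Omega x Omega' with P''(A x B) = P(A) P'(B).
OmegaPP = [(w, wp) for w in Omega for wp in OmegaP]
PPP = {(w, wp): P[w] * Pp[wp] for (w, wp) in OmegaPP}

# Lift X and Z to the product space via the coordinate projections.
X_lift = lambda ww: X[ww[0]]  # X~(w, w') = X(w)
Z_lift = lambda ww: Z[ww[1]]  # Z~(w, w') = Z(w')

def prob(event):
    """P'' of the subset of Omega'' described by the predicate `event`."""
    return sum(PPP[ww] for ww in OmegaPP if event(ww))

# Independence of the lifted variables: the product formula holds
# for every pair of values (hence for every pair of events).
for a in set(X.values()):
    for z in set(Z.values()):
        joint = prob(lambda ww: X_lift(ww) == a and Z_lift(ww) == z)
        split = prob(lambda ww: X_lift(ww) == a) * prob(lambda ww: Z_lift(ww) == z)
        assert abs(joint - split) < 1e-12
print("lifted X and Z are independent on the product space")
```

Note that `X_lift` ignores the second coordinate and `Z_lift` the first, which is exactly why their joint law factorizes under the product measure.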