[Math] Two random variables from the same probability density function: how can they be different?

measure-theory, probability, probability-theory, random-variables

The definition of $X$ as a random variable according to Wikipedia is as follows:

> Let $(\Omega, \mathcal{F}, P)$ be a probability space and $(E, \mathcal{E})$ a measurable space. Then an $(E, \mathcal{E})$-valued random variable is a function $X\colon \Omega \to E$ which is $(\mathcal{F}, \mathcal{E})$-measurable. The latter means that, for every subset $B\in\mathcal{E}$, its preimage $X^{-1}(B)\in \mathcal{F}$, where $X^{-1}(B) = \{\omega : X(\omega)\in B\}$. This definition enables us to measure any subset $B$ in the target space by looking at its preimage, which by assumption is measurable.

And for real-valued random variables:

> In this case the observation space is the real numbers. Recall, $(\Omega, \mathcal{F}, P)$ is the probability space. For a real observation space, the function $X\colon \Omega \rightarrow \mathbb{R}$ is a real-valued random variable if:
>
> $$\{ \omega : X(\omega) \le r \} \in \mathcal{F} \qquad \forall r \in \mathbb{R}.$$
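On a finite sample space this measurability condition can be checked directly. The following is a minimal sketch (the sample space, $\sigma$-algebra, and functions are hypothetical, not from the quoted definition): it tests whether $\{\omega : X(\omega) \le r\}$ lies in $\mathcal{F}$ for every relevant threshold $r$.

```python
# Finite sample space Omega and a sigma-algebra F on it
# (here F is generated by the partition {{a,b}, {c,d}}).
omega = {"a", "b", "c", "d"}
F = [frozenset(), frozenset({"a", "b"}), frozenset({"c", "d"}),
     frozenset({"a", "b", "c", "d"})]

def is_measurable(X):
    """Check that {omega : X(omega) <= r} is in F for every threshold r."""
    thresholds = sorted(set(X.values()))
    return all(
        frozenset(w for w in omega if X[w] <= r) in F
        for r in thresholds
    )

X = {"a": 0, "b": 0, "c": 1, "d": 1}  # constant on the atoms of F
Z = {"a": 0, "b": 1, "c": 0, "d": 1}  # splits the atoms of F

print(is_measurable(X))  # True
print(is_measurable(Z))  # False
```

Here $X$ is measurable because its sub-level sets are unions of the atoms $\{a,b\}$ and $\{c,d\}$, while $Z$ is not, since $\{\omega : Z(\omega) \le 0\} = \{a, c\}$ is not in $\mathcal{F}$.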

Now in statistics and related fields, random variables are introduced as $X \sim p(x)$, where $p(x)$ is a probability distribution. My question is: if $X\sim p(x)$ and $Y\sim p(x)$, how can these represent two different random variables (say, two different standard normal random variables) when both are sampled from the same $p(x)$? That is, how should this be translated into the formal measure-theoretic definition in a way that differentiates between the two?

Best Answer

On the same $\Omega$, try $X$ uniform on $\{0,1\}$ and $Y=1-X$, then $\{X\ne Y\}=\Omega$.

Edit: Recall that in the probabilistic jargon, a random variable is just a measurable function, here $X:\Omega\to\{0,1\}$ and $Y:\Omega\to\{0,1\}$, that is, for every $\omega$ in $\Omega$, $X(\omega)=0$ or $X(\omega)=1$ and $Y(\omega)=0$ or $Y(\omega)=1$. A notation is that $\{X\ne Y\}=\{\omega\in\Omega\mid X(\omega)\ne Y(\omega)\}$. In the present case, $X(\omega)\ne Y(\omega)$ for every $\omega$ in $\Omega$ hence $\{X\ne Y\}=\Omega$.

Distributions, on the other hand, are probability measures on the target space $\{0,1\}$. Here the distribution $\mu$ of $X$ is uniform on $\{0,1\}$, that is, $\mu(\{0\})=\mu(\{1\})=\frac12$ since $P(X=0)=P(X=1)=\frac12$. Likewise, $P(Y=0)=P(Y=1)=\frac12$ hence $\mu$ is also the distribution of $Y$. Thus, $X$ and $Y$ can both be used to sample from $\mu$ although $X(\omega)=Y(\omega)$ happens for no $\omega$ in $\Omega$.
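The answer's example can be sketched numerically (a minimal simulation; the variable names are illustrative): drawing $X$ uniform on $\{0,1\}$ and setting $Y(\omega) = 1 - X(\omega)$ on the same outcomes gives two variables with the same distribution that disagree at every $\omega$.

```python
import random

random.seed(0)

# X uniform on {0,1}; Y = 1 - X is defined pointwise on the same outcomes.
samples_X = [random.randint(0, 1) for _ in range(100_000)]
samples_Y = [1 - x for x in samples_X]

# Same distribution: both empirical frequencies of 1 are close to 1/2.
freq_X = sum(samples_X) / len(samples_X)
freq_Y = sum(samples_Y) / len(samples_Y)
print(freq_X, freq_Y)

# Yet X(omega) != Y(omega) at every outcome, i.e. {X != Y} = Omega.
print(all(x != y for x, y in zip(samples_X, samples_Y)))  # True
```

Both columns of samples are legitimate draws from $\mu$, which is exactly the point: a distribution determines the marginal frequencies, not the function $\omega \mapsto X(\omega)$ itself.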