Probability – Random Variables on the Same Space with Different Distributions

measure-theory, probability, probability-distributions

Consider the real-valued random variable $X$ and suppose it is defined on the probability space $(\Omega, \mathcal{A}, \mathbb{P})$. Assume that $X \sim N(\mu, \sigma^2)$. This means that
$$
(1)\text{ } \mathbb{P}(X\in [a,b])=\mathbb{P}(\{\omega \in \Omega \text{ s.t. } X(\omega)\in [a,b]\})=\frac{1}{\sigma\sqrt{2\pi}}\int_{a}^{b}e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}\,\mathrm dx
$$
In several books I found that we can also say that $X$ is distributed according to $\mathbb{P}$.

Now suppose that we add another random variable $Y$
on the same probability space and assume $Y \sim U([0,1])$. This means that, for $0\leq a\leq b \leq 1$
$$
(2)\text{ } \mathbb{P}(Y \in [a,b])=\mathbb{P}(\{\omega \in \Omega \text{ s.t. } Y(\omega)\in [a,b]\})=b-a
$$

Question: is it a contradiction that $X$ and $Y$ are defined on the same probability space but have different probability distributions? What is the relation between $\mathbb{P}$, the normal CDF and the uniform CDF? Can we say that both $X$ and $Y$ are distributed according to $\mathbb{P}$ even though they have different distributions?

Best Answer

Admittedly, a holistic answer to your questions would require more measure-theoretic machinery than what follows. However, I will attempt to give you succinct responses that you might find helpful.

So, let the real-valued random variables $X, Y$ be defined on the same probability space $(\Omega, \Sigma, \mathbb P)$.

1) $X$ and $Y$ are measurable so that, for instance, for the interval of real numbers $[a,b]$, we necessarily have $\left\{X\in[a,b]\right\}, \left\{Y\in[a,b]\right\} \in \Sigma$, while we need not have $$\{X\in[a,b]\} = \{Y\in[a,b]\}.$$

2) Because of 1) above, we need not have $$\mathbb P\left\{X\in[a,b]\right\} = \mathbb P\left\{Y\in[a,b]\right\}.$$
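To make 1) and 2) concrete, here is a minimal numerical sketch (an illustrative construction of my own, not required by the answer): take $\Omega=[0,1]$ with $\mathbb P$ the Lebesgue measure, and define $Y(\omega)=\omega$ and $X(\omega)=\mu+\sigma\,\Phi^{-1}(\omega)$, where $\Phi^{-1}$ is the standard normal quantile function. Both maps live on the same space, yet $\mathbb P\{X\in[a,b]\}\neq\mathbb P\{Y\in[a,b]\}$ in general.

```python
import random
from statistics import NormalDist

# Sample space Omega = [0, 1] with P = Lebesgue measure (sampled uniformly).
# Two random variables on the SAME probability space:
#   Y(omega) = omega                            -> Uniform([0, 1])
#   X(omega) = mu + sigma * Phi^{-1}(omega)     -> N(mu, sigma^2)
mu, sigma = 0.0, 1.0
std_normal = NormalDist()

def Y(omega):
    return omega

def X(omega):
    return mu + sigma * std_normal.inv_cdf(omega)

# Monte Carlo estimate of P{X in [a,b]} and P{Y in [a,b]} over the same omegas.
random.seed(0)
a, b = 0.0, 0.5
n = 200_000
omegas = [random.random() for _ in range(n)]
p_X = sum(a <= X(w) <= b for w in omegas) / n
p_Y = sum(a <= Y(w) <= b for w in omegas) / n

print(p_X)  # ≈ Phi(0.5) - Phi(0) ≈ 0.19, up to Monte Carlo error
print(p_Y)  # ≈ 0.5
```

The events $\{X\in[0,\tfrac12]\}$ and $\{Y\in[0,\tfrac12]\}$ are different subsets of $\Omega$, so they get different probabilities under the one measure $\mathbb P$.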

3) Note that, because we may define the probability measure $\mathbb P_X(B):=\mathbb P\{X \in B\}$ over Borel sets $B \in \mathcal B(\mathbb R)$, we can speak of $X$ being distributed according to $\mathbb P_X$. In so doing, we are thinking of $X$ in terms of the probability space $(\mathbb R, \mathcal B(\mathbb R), \mathbb P_X)$, not the probability space $(\Omega, \Sigma, \mathbb P)$. In your example, since $X\sim N(\mu, \sigma^2)$, we have an integral representation of $\mathbb P_X$ with respect to the Lebesgue measure, so that $$ \mathbb P_X([a,b])=\frac{1}{\sigma\sqrt{2\pi}}\int_{-\infty}^{\infty}{\bf{1}}_{[a,b]}(x)e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}\mathrm dx\,. $$

A similar development holds for the uniform random variable $Y$.
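As a quick sanity check on 3) (a sketch of mine, using only the Python standard library), one can approximate the Lebesgue integral of the normal density over $[a,b]$ by a Riemann sum and compare it with the closed-form CDF difference; the uniform case $\mathbb P_Y([c,d])=d-c$ is immediate.

```python
import math
from statistics import NormalDist

# Check P_X([a,b]) = (1/(sigma*sqrt(2*pi))) * integral_a^b exp(-((x-mu)/sigma)^2 / 2) dx
# against the closed-form normal CDF difference.
mu, sigma = 0.0, 1.0
a, b = -1.0, 2.0

def normal_density(x):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Midpoint Riemann sum approximating the Lebesgue integral of the density over [a, b].
n = 100_000
h = (b - a) / n
riemann = sum(normal_density(a + (k + 0.5) * h) for k in range(n)) * h

cdf_diff = NormalDist(mu, sigma).cdf(b) - NormalDist(mu, sigma).cdf(a)
print(abs(riemann - cdf_diff) < 1e-6)  # True: both compute P_X([a, b])

# Uniform case: the density is 1 on [0, 1], so P_Y([c, d]) = d - c.
c, d = 0.25, 0.75
print(d - c)  # 0.5
```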

4) All of the foregoing is just one way of proceeding; there are alternatives. For instance, one may define $X, Y$ over the same measurable space $(\Omega, \Sigma)$, but different probability spaces, $(\Omega, \Sigma, \mathbb P_X)$ and $(\Omega, \Sigma, \mathbb P_Y)$, with different probability measures.
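Point 4) can be sketched concretely as well (an illustrative choice of mine, not part of the answer): fix the measurable space $([0,1], \mathcal B([0,1]))$ and equip it with two measures given by densities with respect to Lebesgue measure. The same map $Z(\omega)=\omega$ then has a different distribution under each measure.

```python
# One measurable space ([0,1], Borel), two probability measures:
#   P_Y: uniform density 1 on [0,1]
#   P_X: Beta(2,2) density 6x(1-x) on [0,1]   (an arbitrary illustrative choice)
# The SAME map Z(omega) = omega is Uniform([0,1]) under P_Y and Beta(2,2) under P_X.
a, b = 0.0, 0.25

# P_Y{Z in [a,b]} = b - a
p_under_uniform = b - a

# P_X{Z in [a,b]} = integral_a^b 6x(1-x) dx; antiderivative is 3x^2 - 2x^3.
def F(x):
    return 3 * x**2 - 2 * x**3

p_under_beta = F(b) - F(a)

print(p_under_uniform)  # 0.25
print(p_under_beta)     # 0.15625
```

Same set $\{Z\in[0,\tfrac14]\}$, same sigma-algebra, different measures, hence different probabilities: this is exactly the freedom described in 4).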