Are the inverse cumulative distribution functions of two real-valued random variables always independent?

measure-theory, probability-theory, random-variables

Say we have two real-valued random variables $X, Y$ on the probability space $(\mathbb{R}, \Sigma_{\mathbb{R}}, \mu)$, where $\mu$ is the uniform measure on $[0,1]$. Let $\phi_x$ and $\phi_y$ denote the CDFs of $X$ and $Y$, respectively.

By the inverse probability transform, the functions $\phi_x^{-1}$ and $\phi_y^{-1}$ accept samples drawn from $(\mathbb{R}, \Sigma_{\mathbb{R}}, \mu)$ and return samples drawn from $(\mathbb{R}, \Sigma_{\mathbb{R}}, X_*\mu)$ and $(\mathbb{R}, \Sigma_{\mathbb{R}}, Y_*\mu)$ respectively.

Therefore, my understanding is that when $\phi_x^{-1}$ and $\phi_y^{-1}$ are measurable, they are themselves random variables. How is the independence or dependence of these random variables related to the independence or dependence of $X$ and $Y$? Are these random variables always independent?

(The question "Are right continuous functions measurable?" suggests that $\phi_x$ and $\phi_y$ are always measurable. I imagine that the measurability of $\phi_x^{-1}$ and $\phi_y^{-1}$ is a separate consideration.)

Best Answer

They are not independent. In fact, they are perfectly dependent: both are pinned to the same percentile level. I'm going to change the notation slightly to clarify this point.

Let our underlying probability space be $([0,1],\mathcal{B}[0,1],\lambda)$, the unit interval with the Borel $\sigma$-field and the Lebesgue measure.

Now, in your question, $X$ and $Y$ really do nothing more than provide probability laws. (You introduced them as random variables, but immediately took their CDFs, so it may be best to think of $X$ and $Y$ as given distributions over $\mathbb{R}$, rather than as functions from $[0,1]$ to $\mathbb{R}$.) So, we are given two CDFs, $F_X$ and $F_Y$.

We note that $F_X^{-1} : [0,1] \rightarrow \mathbb{R}$ is a random variable with the desired distribution, and similarly for $F_Y^{-1}$. Furthermore, when we pick $\omega \in [0,1]$, $F_X^{-1}(\omega)$ evaluates to the $\omega$-quantile of the distribution; e.g., if $\omega = 0.5$, then $F_X^{-1}(\omega)$ and $F_Y^{-1}(\omega)$ are both medians of their respective distributions.
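To spell out why $F_X^{-1}$ has the desired distribution (a short check, reading $F_X^{-1}$ as the generalized inverse $F_X^{-1}(\omega) = \inf\{x : F_X(x) \geq \omega\}$, since $F_X$ need not be strictly increasing; for this inverse, $F_X^{-1}(\omega) \leq a \iff \omega \leq F_X(a)$):

$$\lambda\big(\{\omega \in [0,1] : F_X^{-1}(\omega) \leq a\}\big) = \lambda\big(\{\omega \in [0,1] : \omega \leq F_X(a)\}\big) = F_X(a),$$

so $F_X^{-1}$ indeed has CDF $F_X$. This also addresses the measurability worry in the question: the generalized inverse is non-decreasing, hence Borel measurable.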

For even more concreteness, consider the trivial case where $X$ and $Y$ are both uniform on $[0,1]$. In this case, $F_X^{-1}(\omega) = F_Y^{-1}(\omega) = \omega$, so the two random variables are exactly equal. Definitely not independent!
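More generally, the same equivalence as above gives the joint CDF directly (a computation the answer implies but does not write out; it identifies the joint law as the comonotone, or upper Fréchet, copula):

$$\Pr\big(F_X^{-1} \leq a,\ F_Y^{-1} \leq b\big) = \lambda\big(\{\omega : \omega \leq F_X(a)\} \cap \{\omega : \omega \leq F_Y(b)\}\big) = \min\big(F_X(a), F_Y(b)\big).$$

For independence we would need this to equal the product $F_X(a)\,F_Y(b)$ for all $a, b$, which happens only in the degenerate case where $X$ or $Y$ is almost surely constant.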