Solved – How to check if functions of i.i.d random variables are dependent or independent

Tags: independence, random variable

I'm new to this forum and to the science of statistics. This is my question:

Let's say that we have two i.i.d. random variables $X$ and $Y$, which both follow a Rayleigh distribution. Then, we define two new random variables $U$ and $V$ as follows: $U = \frac{X^{2}}{Y^{2}+a}$ and $V = \frac{Y^{2}}{X^{2}+a}$, where $a$ is a constant. Are $U$ and $V$ independent or dependent?

Intuitively speaking, I believe that the two are dependent, as I can write $X^{2} = U(Y^{2}+a)$ and substitute this into $V$ to get $V=\frac{Y^{2}}{U(Y^{2}+a)+a}$. But another man told me that they are independent since they are created independently from the same distribution. So I'm pretty confused right now.

Best Answer

Independence of random variables $U$ and $V$ implies the distribution of $U$ is the same regardless of what value $V$ might have.

In some cases, checking independence requires working out the joint distribution of $(U,V)$. But if you suspect a lack of independence, it suffices to find enough values of $V$ for which the conditional distribution of $U$ differs.

("Enough" means there has to be nonzero probability of achieving values of $V$ where the conditional distribution of $U$ varies.)

In this case, algebra tells us that

$$Y^2 = V(X^2 + a),$$

whence

$$\frac{1}{U} = \frac{Y^2+a}{X^2} = \frac{V(X^2+a)+a}{X^2} = V + a \frac{V+1}{X^2}.\tag{1}$$

With the Rayleigh distribution, $X^2$ has positive probability density for all $X^2 \gt 0.$ As $X^2$ ranges through all positive numbers, the right hand side of $(1)$ ranges over the interval $(V, \infty)$ when $a(V+1)\gt 0$, over the interval $(-\infty,V)$ when $a(V+1) \lt 0$, and otherwise is fixed at $V$. This immediately implies that the range of values of $U$ that have some chance of happening depends on $V$, and that we cannot get rid of this problem by eliminating a set of $V$ having just zero probability.

Because the range of possible values of $U$ differs with $V$, the conditional probability distribution of $U$ obviously varies with $V$, too. Therefore $U$ and $V$ are not independent.
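As a sanity check, here is a minimal simulation sketch of this conclusion. It assumes $a = 1$ and a unit Rayleigh scale (both arbitrary choices not fixed in the question). With $a \gt 0$, equation $(1)$ forces $1/U \gt V$, so every simulated pair should satisfy $UV \lt 1$, and the conditional behavior of $U$ should shift with $V$:

```python
import numpy as np

# Hedged sketch: empirical check that U = X^2/(Y^2+a) and V = Y^2/(X^2+a)
# are dependent when X, Y are i.i.d. Rayleigh.  The constant a = 1 and the
# Rayleigh scale 1.0 are illustrative assumptions, not values from the question.
rng = np.random.default_rng(0)
n = 1_000_000
a = 1.0

x = rng.rayleigh(scale=1.0, size=n)
y = rng.rayleigh(scale=1.0, size=n)

u = x**2 / (y**2 + a)
v = y**2 / (x**2 + a)

# Equation (1) with a > 0 implies 1/U > V, i.e. U*V < 1 for every pair.
print("max of U*V:", (u * v).max())          # stays below 1

# The conditional distribution of U shifts with V: compare U over low-V
# and high-V pairs, split at the sample median of V.
low_v, high_v = v < np.median(v), v >= np.median(v)
print("mean U | V small:", u[low_v].mean())
print("mean U | V large:", u[high_v].mean())
```

If $U$ and $V$ were independent, the two conditional means would agree up to sampling error; the range restriction $UV \lt 1$ alone already rules that out.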


The "other man" can be confuted by considering a simplified version of his assertion where there is just one variable, say $X$. We may "independently" construct many random variables from $X$, such as $U=2X$ and $V=4X$, but I hope it's obvious the resulting variables are not themselves independent. In this example, for instance, $V=2U$ exhibits the dependence explicitly. The same argument applies to multivariate random variables and for the same reasons.

Finally, there are some special cases where sets of variables constructed from the same "core" of independent variables are independent. The best-known (and arguably most important) example consists of an orthogonal transformation of independent and identically distributed Normal variables: the resulting variables are still independent and identically distributed.
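For contrast, here is a minimal sketch of that Normal special case. The rotation angle of $45^\circ$ and the sample size are arbitrary choices for illustration; an orthogonal transform of two i.i.d. standard Normal samples leaves the components with unit variance and zero correlation, and for jointly Normal variables zero correlation is equivalent to independence:

```python
import numpy as np

# Hedged sketch of the Normal special case: an orthogonal transformation
# (here, a 45-degree rotation) of i.i.d. standard Normal variables yields
# variables that are again i.i.d. Normal.  Angle and sample size are arbitrary.
rng = np.random.default_rng(1)
n = 1_000_000
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)

theta = np.pi / 4
z1 = np.cos(theta) * x1 + np.sin(theta) * x2
z2 = -np.sin(theta) * x1 + np.cos(theta) * x2

# Marginal variances stay near 1 and the components remain uncorrelated;
# for jointly Normal variables this is equivalent to independence.
print("var(z1), var(z2):", z1.var(), z2.var())
print("corr(z1, z2):", np.corrcoef(z1, z2)[0, 1])
```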
