[Math] Proving two random variables are uncorrelated but not independent

correlation, independence, probability, random variables

I'm trying to find two random variables $X$ and $Y$ that are not independent, i.e.

there exist $a$ and $b$ such that $P(X=a \land Y=b)\neq P(X=a)P(Y=b),$

but that are uncorrelated:

$\mathbb{E}[XY]=\mathbb{E}[X]\,\mathbb{E}[Y].$

I'm trying to understand some of the examples of this that I have seen, but they almost always define two random variables $X$ and $Y$ either in terms of each other, or in terms of a third random variable. This seems to be at odds with the formal definition of a random variable that I am accustomed to. That is, if we have a probability space $(\Omega,P)$, then a random variable $X$ is a mapping $X:\Omega \rightarrow \mathbb{R}$ (or some other space, but let's use $\mathbb{R}$ for simplicity).

So something like "$\Omega=\{-1,0,1\}$, $P$ uniform, $X(a)=a$ for all $a\in\Omega$, and $Y=X^{2}$" doesn't seem like a valid example to me, since $Y$ does not formally meet the definition of a random variable.

I'm also unsure what values I would even sum over when computing $\mathbb{E}[XY]$ in this case. Any help would be great.

Best Answer

Your example is fine. $Y$ is a random variable, because $Y$ is a mapping from the probability space to $\mathbb{R}$, given by $Y(\omega)=X^2(\omega)=\omega^2$ for $\omega\in\Omega$. So most of the examples you have seen are legitimate. To compute $\mathbb{E}[XY]$ you sum over the points of the sample space: $\mathbb{E}[XY]=\sum_{\omega\in\Omega}X(\omega)Y(\omega)P(\{\omega\})=\tfrac{1}{3}\left((-1)^3+0^3+1^3\right)=0$, and since $\mathbb{E}[X]=0$ as well, $\mathbb{E}[XY]=\mathbb{E}[X]\,\mathbb{E}[Y]$. Yet $P(X=1\land Y=1)=\tfrac{1}{3}\neq\tfrac{1}{3}\cdot\tfrac{2}{3}=P(X=1)P(Y=1)$, so $X$ and $Y$ are not independent.
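
If it helps to see this concretely, here is a minimal Python sketch that just enumerates the three-point sample space from the question and checks both properties by brute force. The helper names `E` and `prob` are only for this illustration, not part of any library.

```python
from fractions import Fraction
from itertools import product

# Sample space and uniform probability measure from the question.
omega = [-1, 0, 1]
P = {w: Fraction(1, 3) for w in omega}

# Both X and Y are plain functions Omega -> R, so Y = X^2 is a
# perfectly good random variable on the same space.
X = lambda w: w
Y = lambda w: w ** 2

def E(Z):
    """Expectation of Z: sum of Z(w) * P({w}) over the sample space."""
    return sum(Z(w) * P[w] for w in omega)

def prob(event):
    """Probability of the event {w in Omega : event(w) is True}."""
    return sum(P[w] for w in omega if event(w))

# Uncorrelated: E[XY] equals E[X] E[Y] (both are 0 here).
print(E(lambda w: X(w) * Y(w)), E(X) * E(Y))

# Not independent: find a, b with P(X=a, Y=b) != P(X=a) P(Y=b).
for a, b in product({X(w) for w in omega}, {Y(w) for w in omega}):
    joint = prob(lambda w: X(w) == a and Y(w) == b)
    marg = prob(lambda w: X(w) == a) * prob(lambda w: Y(w) == b)
    if joint != marg:
        print(f"P(X={a}, Y={b}) = {joint} != {marg} = P(X={a})P(Y={b})")
```

Running it prints $0$ for both sides of the correlation check, and lists pairs such as $(a,b)=(1,1)$ where the joint probability $\tfrac{1}{3}$ differs from the product of marginals $\tfrac{2}{9}$.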
