Covariance between random variables

covariance, probability

I'm working through the mathematical theory of random variables, and something fishy is messing with my logic. Two questions:

  1. When we have two random variables $X$ and $Y$, computing the covariance $\operatorname{Cov}(X, Y)$ basically tells us how the values of $X$ and $Y$ move relative to each other. Is this true?

  2. Since each random variable may generate a potentially infinite set of values, how can we determine the covariance over these infinite sets?

Just to recall the definition I found:

$\operatorname{Cov}(X,Y)=\mathbb E\big[(X-\mathbb E[X])(Y-\mathbb E[Y])\big]=\mathbb E[XY]-\mathbb E[X]\,\mathbb E[Y]$

It uses expected values to define the covariance. That part looks OK to me.
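
To convince myself that the two forms agree, I ran a quick numerical sanity check (just a rough NumPy sketch; the particular distributions are an arbitrary choice on my part):

```python
import numpy as np

rng = np.random.default_rng(42)

# Arbitrary dependent pair: X uniform on [0, 1], Y = X plus some noise.
x = rng.uniform(0, 1, size=100_000)
y = x + rng.normal(0, 0.1, size=100_000)

# Form 1: E[(X - E[X]) (Y - E[Y])], estimated from the samples.
form1 = np.mean((x - x.mean()) * (y - y.mean()))

# Form 2: E[XY] - E[X] E[Y], estimated from the same samples.
form2 = np.mean(x * y) - x.mean() * y.mean()

print(form1, form2)  # the two forms agree up to floating-point rounding
```

Both estimates land near $1/12 \approx 0.083$, the variance of a uniform variable on $[0,1]$, since the added noise is independent of $X$.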

Best Answer

1 - Not quite. The covariance (more precisely, the correlation coefficient) measures how much the variables "depend" on each other *linearly*. A correlation coefficient close to $\pm 1$ tells us that one variable is "almost" a linear function of the other, i.e. there exist $a$ and $b$ such that $Y \approx aX+b$. If two random variables are uncorrelated, that does not mean there is no dependence between them, just that this dependence cannot be a linear one. See the example below.
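
For instance, take $X$ standard normal and $Y = X^2$. Then $\mathbb E[X] = 0$ and $\mathbb E[XY] = \mathbb E[X^3] = 0$, so

$\operatorname{Cov}(X,Y) = \mathbb E[XY] - \mathbb E[X]\,\mathbb E[Y] = 0,$

even though $Y$ is completely determined by $X$: zero covariance, total dependence, just not a linear one.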

2 - Well, you just answered this in your question: use the expectations! We can compute expectations of random variables even if they take values in an uncountable set.
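
Concretely, if $X$ and $Y$ have a joint density $f_{X,Y}$ (assuming one exists), the expectation in the definition becomes an integral over the plane:

$\operatorname{Cov}(X,Y) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \bigl(x - \mathbb E[X]\bigr)\bigl(y - \mathbb E[Y]\bigr)\, f_{X,Y}(x,y)\, dx\, dy,$

and in the discrete case the integral becomes a sum over the (possibly countably infinite) support. Either way, integration or summation handles the infinitely many values; no enumeration of individual outcomes is needed.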