How are joint probability distributions constructed from product measures?

measure-theory, probability-theory, products, statistics

I often see a construction in measure theory regarding product measures. This is outlined below (taken from Wikipedia because it is fairly generic):

> Let $(X_{1},\Sigma_{1})$ and $(X_{2},\Sigma_{2})$ be two measurable spaces, that is, $\Sigma_{1}$ and $\Sigma_{2}$ are sigma algebras on $X_{1}$ and $X_{2}$ respectively, and let $\mu_{1}$ and $\mu_{2}$ be measures on these spaces. Denote by $\Sigma_{1}\otimes\Sigma_{2}$ the sigma algebra on the Cartesian product $X_{1}\times X_{2}$ generated by subsets of the form $B_{1}\times B_{2}$, where $B_{1}\in\Sigma_{1}$ and $B_{2}\in\Sigma_{2}$. This sigma algebra is called the tensor-product σ-algebra on the product space. A product measure $\mu_{1}\times\mu_{2}$ is defined to be a measure on the measurable space $(X_{1}\times X_{2},\Sigma_{1}\otimes\Sigma_{2})$ satisfying the property $(\mu_{1}\times\mu_{2})(B_{1}\times B_{2})=\mu_{1}(B_{1})\,\mu_{2}(B_{2})$ for all $B_{1}\in\Sigma_{1},\ B_{2}\in\Sigma_{2}$.
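As a quick sanity check on this defining property, here is a minimal Monte Carlo sketch in Python (assuming NumPy; the choice $\mu_1=\mu_2=$ Lebesgue measure on $[0,1]$ and the rectangle $B_1\times B_2=[0.2,0.7]\times[0.1,0.5]$ are illustrative, not part of the quoted definition):

```python
import numpy as np

# Monte Carlo check of (mu1 x mu2)(B1 x B2) = mu1(B1) * mu2(B2), with
# mu1 = mu2 = Lebesgue measure on [0, 1] (the uniform distribution) and
# a hypothetical rectangle B1 x B2 = [0.2, 0.7] x [0.1, 0.5].
rng = np.random.default_rng(0)
n = 1_000_000

x = rng.uniform(0.0, 1.0, n)  # samples from mu1
y = rng.uniform(0.0, 1.0, n)  # samples from mu2, drawn independently

in_b1 = (0.2 <= x) & (x <= 0.7)  # mu1(B1) = 0.5
in_b2 = (0.1 <= y) & (y <= 0.5)  # mu2(B2) = 0.4

print(np.mean(in_b1 & in_b2))           # estimates (mu1 x mu2)(B1 x B2), ~ 0.2
print(np.mean(in_b1) * np.mean(in_b2))  # estimates mu1(B1) * mu2(B2), ~ 0.2
```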

Questions:

  1. From the perspective of probability theory, this product measure construction looks a lot like the construction of a joint probability $P(X,Y)$ where $X$ and $Y$ are independent. Is this correct?

  2. If (1.) is correct, then how does the notion of correlation enter the structure of product measures? How is correlation built into measure theory so that it can carry over to probability theory?

Best Answer

1.

You're correct: if $X$ and $Y$ are (real-valued) random variables on a common probability space with respective distributions $X(P)$ and $Y(P)$ (the pushforward measures of $P$ under $X$ and $Y$), then $X$ and $Y$ are independent exactly if the distribution of $(X,Y)$ (a measure on $\mathbb{R}^2$, since $(X,Y)$ is a random vector) is the product measure $X(P)\otimes Y(P)$.
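To illustrate numerically, here is a small sketch (assuming NumPy; the standard-normal marginals and the quadrant $B_1\times B_2=[0,\infty)^2$ are hypothetical choices): for an independent pair the rectangle probability factorizes as the product measure demands, while for a dependent pair with the same marginals it does not.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Independent pair: the joint law of (X, Y) is the product X(P) (x) Y(P).
x_ind = rng.normal(size=n)
y_ind = rng.normal(size=n)

# Dependent pair with the same standard-normal marginals: Y = X, so the
# joint law concentrates on the diagonal and is not a product measure.
x_dep = rng.normal(size=n)
y_dep = x_dep

def rect_prob(x, y):
    """Empirical P((X, Y) in B1 x B2) for B1 = B2 = [0, infinity)."""
    return np.mean((x >= 0) & (y >= 0))

print(rect_prob(x_ind, y_ind))  # ~ 0.25 = P(X >= 0) * P(Y >= 0): factorizes
print(rect_prob(x_dep, y_dep))  # ~ 0.50: does not factorize
```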

2.

Correlation is not inherent to product measures, since these are exactly the distributions of independent random variables, which are in particular uncorrelated. However, when $X$ and $Y$ are not independent, $(X,Y)(P)$ is not a product measure but some other measure on $\mathbb{R}^2$. This allows the covariance $$ \int_{\mathbb{R}^2} xy\; \mathrm{d}(X,Y)(P)-\int_{\mathbb{R}^2} xy\;\mathrm{d}\bigl(X(P)\otimes Y(P)\bigr) $$ (that is, $E[XY]-E[X]\,E[Y]$) to be nonzero.
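A short simulation of this difference of integrals (assuming NumPy; the dependent pair $Y=X+\tfrac12 Z$ is an illustrative choice, and by Fubini the second integral splits into $E[X]\,E[Y]$):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# A dependent pair: Y = X + 0.5 * Z with Z independent noise, so the joint
# law (X, Y)(P) is not the product of the marginals.
x = rng.normal(size=n)
y = x + 0.5 * rng.normal(size=n)

# First integral: xy against the joint law (X, Y)(P), i.e. E[XY].
e_xy_joint = np.mean(x * y)

# Second integral: xy against X(P) (x) Y(P); by Fubini it splits into
# E[X] * E[Y].
e_xy_product = np.mean(x) * np.mean(y)

print(e_xy_joint - e_xy_product)  # the covariance, ~ Var(X) = 1 here
print(np.cov(x, y)[0, 1])         # agrees up to Monte Carlo error
```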

3.

About the relation between product measures and other measures.

One way that general measures arise from product measures is via conditional distributions (see, for instance, https://en.wikipedia.org/wiki/Regular_conditional_probability). If you have a good notion that "conditional on $Y=y$, $X$ follows the distribution $\mu_y$", then you have the following scheme: let $(U,Y)$ be an independent pair such that $Y$ keeps its original distribution and $U$ is uniform on $[0,1]$. Then, if $F_y$ is the cumulative distribution function of $\mu_y$ with right-continuous generalised inverse $F_y^{-1}$, the variable $F_y^{-1}(U)$ follows the distribution $\mu_y$ for each fixed $y$.

Therefore, given such a pair $(U,Y)$, the transformation $(U,Y)\mapsto (F_Y^{-1}(U),Y)$ yields a variable with the same distribution as $(X,Y)$. Thus, the latter distribution (which is not a product measure) can be constructed from the former (which is).
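Here is a sketch of the whole scheme (assuming NumPy; the conditional law $\mu_y$, taken to be exponential with rate $1+y^2$, and the normal marginal for $Y$ are hypothetical choices made so that $F_y^{-1}$ has a closed form):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

# Hypothetical conditional law: given Y = y, let X ~ Exp(rate = 1 + y^2),
# so mu_y has CDF F_y(x) = 1 - exp(-(1 + y^2) x) and generalised inverse
# F_y^{-1}(u) = -log(1 - u) / (1 + y^2).
def f_inv(u, y):
    return -np.log1p(-u) / (1.0 + y**2)

# Start from an independent pair (U, Y), whose joint law IS a product
# measure: U uniform on [0, 1], Y standard normal (an arbitrary marginal).
u = rng.uniform(size=n)
y = rng.normal(size=n)

# Transform (U, Y) -> (F_Y^{-1}(U), Y); the image law is the joint law of
# (X, Y), which is not a product measure since X depends on Y.
x = f_inv(u, y)

# Sanity check against sampling X directly from its conditional law.
x_direct = rng.exponential(scale=1.0 / (1.0 + y**2))
print(np.mean(x), np.mean(x_direct))  # the two constructions agree
print(np.corrcoef(x, y**2)[0, 1])     # negative: larger y^2 shrinks X
```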
