[Math] Proving normal distribution in Box Muller


I'm working on the following exercise from Klenke:

Let $U$ and $V $ be independent random variables that are uniformly distributed on $[0,1]$. Define $$X := \sqrt{-2\log U}\cos(2\pi V)\quad\textrm{and}\quad Y := \sqrt{-2\log(U)}\sin(2\pi V)$$ Show that $X$ and $Y$ are independent and $\mathcal N_{0,1}$-distributed.
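Before proving anything, it can be reassuring to check the claim numerically. The sketch below (standard library only; the function name, sample size, and seed are my own choices) samples $X$ and $Y$ exactly as defined and compares the first two sample moments of $X$ to those of $\mathcal N_{0,1}$:

```python
import math
import random

def box_muller(n, seed=0):
    """Generate n (X, Y) pairs from the Box-Muller transform above."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        # 1 - random() lies in (0, 1], so log never sees 0
        u, v = 1.0 - rng.random(), rng.random()
        r = math.sqrt(-2.0 * math.log(u))
        pairs.append((r * math.cos(2 * math.pi * v),
                      r * math.sin(2 * math.pi * v)))
    return pairs

samples = box_muller(100_000)
xs = [x for x, _ in samples]
mean_x = sum(xs) / len(xs)
var_x = sum((x - mean_x) ** 2 for x in xs) / len(xs)
# For a standard normal, mean_x should be near 0 and var_x near 1
```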

If we let $f_X$ and $f_Y$ be the densities of $X$ and $Y$, I think you need to prove two things: that $f_{X,Y} = f_X f_Y$, where $f_{X,Y}$ is the joint density; and that $f_X(x) = e^{-x^2/2}/\sqrt{2\pi}$ and $f_Y(y) = e^{-y^2/2}/\sqrt{2\pi}$.

Here's what I've tried: Let $R = \sqrt{-2\log U}$ and $\Theta = 2\pi V$, which are independent (easy to prove). One can show that the density of $R$ is $f_R(r) = re^{-r^2/2}$ on $(0,\infty)$, and that $\Theta$ is uniform on $(0, 2\pi)$ with density $f_\Theta(\theta) = 1/2\pi$. So the joint density of $R$ and $\Theta$ is $f_{R,\Theta}(r,\theta) = re^{-r^2/2}/2\pi$, and the joint distribution function is $$F_{R,\Theta}(r,\theta) = \int_0^\theta \int_0^r f_R(s)\,f_\Theta(t)\:ds\:dt = \int_0^\theta \int_0^r \frac 1{2\pi}se^{-s^2/2}\:ds\:dt $$ Let $\phi : (0, \infty) \times (0,2\pi) \to \mathbb R^2$ be defined by $\phi(r,\theta) = (r\cos\theta, r\sin\theta)$, which is a differentiable bijection on a set of full $\mathbb P_{R,\Theta}$-measure. It's straightforward to compute that $$\mathbb P_{X,Y}((-\infty, a] \times (-\infty, b]) = \mathbb P_{R,\Theta}\left(\phi^{-1}((-\infty, a] \times (-\infty, b])\right)$$ for any $a,b \in \mathbb R$, so $\mathbb P_{X,Y} = \mathbb P_{R,\Theta} \circ \phi^{-1}$. One can further show that the Jacobian determinant satisfies $$\left| \det\phi'\left(\phi^{-1}(x,y)\right)\right| = \sqrt{x^2+y^2}$$ so by the transformation formula, we get $$f_{X,Y}(x,y) = \frac{f_{R,\Theta}\left(\phi^{-1}(x,y)\right)}{\left| \det\phi'\left(\phi^{-1}(x,y)\right)\right|} = \frac 1{2\pi} e^{-\left(x^2+y^2\right)/2} = \frac{1}{\sqrt{2\pi}} e^{-x^2/2} \times \frac{1}{\sqrt{2\pi}} e^{-y^2/2}$$
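The claimed density of $R$ can also be corroborated empirically: it integrates to the Rayleigh CDF $F_R(r) = \int_0^r se^{-s^2/2}\,ds = 1 - e^{-r^2/2}$, which should match the empirical CDF of simulated values of $\sqrt{-2\log U}$. A sketch (standard library only; helper name, evaluation points, and seed are my own):

```python
import math
import random

rng = random.Random(1)
n = 100_000
# Samples of R = sqrt(-2 log U); 1 - random() avoids log(0)
rs = [math.sqrt(-2.0 * math.log(1.0 - rng.random())) for _ in range(n)]

def ecdf(data, t):
    """Empirical CDF: fraction of samples <= t."""
    return sum(1 for x in data if x <= t) / len(data)

# Compare against F_R(r) = 1 - exp(-r^2/2) at a few points
max_err = max(abs(ecdf(rs, t) - (1.0 - math.exp(-t * t / 2)))
              for t in (0.5, 1.0, 1.5, 2.0))
# max_err should be small, on the order of 1/sqrt(n)
```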

My question: If we were somehow able to show $f_X(x) = e^{-x^2/2}/\sqrt{2\pi}$ and $f_Y(y) = e^{-y^2/2}/\sqrt{2\pi}$, we'd be done at this point. But it's not clear to me why this should necessarily be true. Is there an obvious reason for this? Why do we now know $X$ and $Y$ are independent and normally distributed?

Best Answer

Well, you have done it all.

Just note that the pdf you obtained, $f_{X,Y}$, is the density of a bivariate normal distribution with mean $\mu=(0,0)'$ and variance-covariance matrix

$$V=\begin{bmatrix} 1 & 0\\ 0 & 1\end{bmatrix}$$

Hence, since $(X,Y)$ is jointly normal and the covariance is $0$, it follows that $X$ and $Y$ are independent: for jointly normal random variables, zero covariance implies independence. More directly, your factorization already gives everything. You showed $f_{X,Y}(x,y) = g(x)h(y)$ where $g$ and $h$ are both the standard normal density; integrating out $y$ yields $f_X(x) = g(x)\int h(y)\,dy = g(x)$, and likewise $f_Y = h$. So the marginals are standard normal, and $f_{X,Y} = f_X f_Y$ is exactly the statement that $X$ and $Y$ are independent.
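As a numerical illustration of the independence claim (a sketch using only the standard library; sample size and seed are my own), one can check that the sample covariance of $X$ and $Y$ is near $0$ and that a joint probability such as $\mathbb P(X\le 0, Y\le 0)$ is near the product of the marginals, $\tfrac 12 \cdot \tfrac 12$:

```python
import math
import random

rng = random.Random(2)
n = 200_000
xs, ys = [], []
for _ in range(n):
    u, v = 1.0 - rng.random(), rng.random()  # avoid log(0)
    r = math.sqrt(-2.0 * math.log(u))
    xs.append(r * math.cos(2 * math.pi * v))
    ys.append(r * math.sin(2 * math.pi * v))

# Sample covariance (means are ~0, so E[XY] suffices as an estimate)
cov = sum(x * y for x, y in zip(xs, ys)) / n

# Joint probability of both negative vs product of the marginal probabilities
p_joint = sum(1 for x, y in zip(xs, ys) if x <= 0 and y <= 0) / n
p_prod = (sum(1 for x in xs if x <= 0) / n) * (sum(1 for y in ys if y <= 0) / n)
# Under independence, p_joint and p_prod should agree (both near 0.25)
```

Of course, checking one rectangle numerically proves nothing; the factorization of the joint density is what carries the proof.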