Independence of the sum and difference of two independent normal random variables: why?

independence, normal-distribution, orthogonal-matrices, probability-theory, proof-explanation

Proposition: Let $X_1$ and $X_2$ be independent and normally distributed with zero expectation
and variance $\sigma^2 > 0$. Then $X_1 + X_2$ and $X_1 - X_2$ are independent and normally
distributed with expectation $0$ and variance $2\sigma^2$.

Proof: The vector $(X_1/\sigma,X_2/\sigma)^T$ is standard Gaussian by assumption. Look at
$$A = \begin{pmatrix}
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\
\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}}
\end{pmatrix}$$

This is an orthogonal matrix, and applying it to our vector yields $\big((X_1+X_2)/(\sqrt{2}\sigma),\,(X_1-X_2)/(\sqrt{2}\sigma)\big)^T$, $\color{red}{\text{which thus must have independent standard normal coordinates}}$.
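
(For concreteness, here is the product written out in coordinates; this only restates the step above and uses nothing beyond the definition of $A$:)
$$A\begin{pmatrix}X_1/\sigma\\ X_2/\sigma\end{pmatrix}
=\begin{pmatrix}\dfrac{X_1+X_2}{\sqrt{2}\,\sigma}\\[6pt] \dfrac{X_1-X_2}{\sqrt{2}\,\sigma}\end{pmatrix},
\qquad AA^{T}=I_2.$$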

The fact that $X_1+X_2\sim\mathcal{N}(0,2\sigma^2)$ and $X_1-X_2\sim\mathcal{N}(0,2\sigma^2)$ is clear to me.
My doubt concerns the $\color{red}{\text{red part}}$ of the proof:

why, once I have the vector $\big((X_1+X_2)/(\sqrt{2}\sigma),\,(X_1-X_2)/(\sqrt{2}\sigma)\big)^T$, does it follow from the information provided that this vector "must have independent standard normal coordinates"? That is, if I understand correctly, why can I say that $X_1+X_2\perp \!\!\! \perp X_1-X_2$?

Best Answer

Call the first vector $v$; its joint log-PDF is $-\ln(2\pi)-\tfrac12 v\cdot v=-\ln(2\pi)-\tfrac12v_1^2-\tfrac12v_2^2$. The orthogonal transformation preserves this: since $\|Av\|=\|v\|$ and $|\det A|=1$, the change of variables leaves the density unchanged. The log-PDF of the new vector is therefore still additively separable (a function of the first coordinate plus a function of the second), which implies independence of the new variables. That the PDF is unchanged also establishes $\tfrac{X_1\pm X_2}{\sqrt{2}\sigma}\sim \mathcal{N}(0,1)$ (note that the $\sigma$ belongs outside the $\sqrt{\ }$, not under it).
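
To spell the red step out as a worked computation (a short sketch; here $V=(X_1/\sigma,\,X_2/\sigma)^T$, $W=AV$, and we only use that $A^{-1}=A^{T}$ with $|\det A^{T}|=1$):
$$f_W(w)=f_V\!\left(A^{T}w\right)\left|\det A^{T}\right|
=\frac{1}{2\pi}e^{-\frac12\|A^{T}w\|^2}
=\frac{1}{2\pi}e^{-\frac12\|w\|^2}
=\frac{1}{\sqrt{2\pi}}e^{-w_1^2/2}\cdot\frac{1}{\sqrt{2\pi}}e^{-w_2^2/2}.$$
The joint density of $W$ factors into the product of two standard normal marginals, which is exactly the statement that $W_1=\frac{X_1+X_2}{\sqrt{2}\sigma}$ and $W_2=\frac{X_1-X_2}{\sqrt{2}\sigma}$ are independent $\mathcal{N}(0,1)$ variables; rescaling by $\sqrt{2}\sigma$ then gives $X_1+X_2\perp\!\!\!\perp X_1-X_2$.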