Solved – Why standard normal samples multiplied by sd are samples from a normal dist with that sd

monte-carlo, normal-distribution, sample, simulation

This answer notes that if a programming language or library provides a procedure that returns random samples from the standard normal distribution, we can generate samples from another normal distribution with the same mean by multiplying the samples by the standard deviation $\sigma$ of the desired distribution.

This seems to work. For example, in R, the histograms produced by the following two lines of code, using the rnorm function to generate samples from a normal distribution, are visually indistinguishable:

hist(rnorm(100000, sd=0.5), xlim=c(-3,3), breaks=50)
hist(0.5*rnorm(100000),     xlim=c(-3,3), breaks=50)
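The visual comparison can also be made numerically. A minimal sketch in Python with NumPy (an assumed alternative to the R code above, not part of the original question), checking that the scaled standard-normal draws have a sample standard deviation close to the target value:

```python
import numpy as np

rng = np.random.default_rng(0)    # fixed seed so the check is reproducible
z = rng.standard_normal(100_000)  # draws from the standard normal N(0, 1)
y = 0.5 * z                       # the same draws scaled by sigma = 0.5

# If the claim holds, y behaves like a sample from N(0, 0.25),
# so its sample standard deviation should be close to 0.5.
print(z.std(), y.std())
```

With 100,000 draws the sample standard deviations land within a few thousandths of 1 and 0.5 respectively.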

I don't understand why it works.

In both the normal probability density function and the cumulative distribution function, $\sigma$ appears, squared, in the argument of an exponential function.

Why should simply multiplying by the standard deviation turn samples of the standard normal into samples of a distribution with that standard deviation? (By contrast, it's not surprising that multiplying the standard normal PDF by a constant doesn't produce the PDF of the normal distribution with that standard deviation.)

(If the answer is closely related: For what classes of probability distributions does multiplying samples by a constant generate samples with a distribution whose standard deviation is that multiple of the original distribution's sd?)

Best Answer

Assume that $X$ has a normal distribution with mean $\mu=0$ and variance $\sigma^2$. Then the probability density function (pdf) of the random variable $X$ is given by:

\begin{eqnarray*} f_X(x)=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{x^{2}}{2\sigma^{2}}} \end{eqnarray*}

for $-\infty<x<\infty$ and $\sigma>0$. Now, when $Z$ has a standard normal distribution, $\mu=0$ and $\sigma^2=1$, so its pdf is given by:

\begin{eqnarray*} f_Z(z)=\frac{1}{\sqrt{2\pi}}e^{-\frac{z^{2}}{2}} \end{eqnarray*}

for $-\infty<z<\infty$. If we then multiply $Z$ by the standard deviation $\sigma$, letting $Y=g(Z)=\sigma Z$, we can use the formula for transformations of random variables (see Casella and Berger (2002), Theorem 2.1.8): \begin{eqnarray*} f_Y(y)=f_Z\left(g^{-1}(y)\right)\left|\frac{d}{dy}g^{-1}(y)\right| \end{eqnarray*} First we find $g^{-1}(y)=\frac{y}{\sigma}$ and $\frac{d}{dy}g^{-1}(y)=\frac{1}{\sigma}$, which is positive because $\sigma>0$.

So, substituting these terms, we have:

\begin{eqnarray*} f_{Y}(y) & = & f_{Z}\left(g^{-1}(y)\right)\left|\frac{d}{dy}g^{-1}(y)\right|\\ & = & f_{Z}\left(\frac{y}{\sigma}\right)\frac{1}{\sigma}\\ & = & \frac{1}{\sqrt{2\pi}}e^{-\frac{\left(\frac{y}{\sigma}\right)^{2}}{2}}\left(\frac{1}{\sigma}\right)\\ & = & \frac{1}{\sqrt{2\pi}}\left(\frac{1}{\sigma}\right)e^{-\frac{y^{2}}{2\sigma^{2}}}\\ & = & \frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{y^{2}}{2\sigma^{2}}} \end{eqnarray*}

This pdf is identical to the pdf of $f_X$ given at the beginning of the proof, which is simply the pdf of a normal random variable with mean $\mu=0$ and variance $\sigma^2$. Hence, $Y\sim N(0, \sigma^2)$. Looking back through the proof, you'll see that the squared $\sigma$ in the exponent is introduced through the original squared term via function composition, with the inner function being the inverse of the transformation, $g^{-1}(y)=\frac{y}{\sigma}$. This is how multiplying by $\sigma$ introduces $\sigma^2$ into the pdf.
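The algebra can also be checked numerically. A minimal sketch in Python (not part of the original answer) that evaluates the right-hand side of the change-of-variables formula and compares it with the $N(0,\sigma^2)$ density written out directly:

```python
import math

def f_Z(z):
    """Standard normal pdf."""
    return math.exp(-z ** 2 / 2) / math.sqrt(2 * math.pi)

def f_Y(y, sigma):
    """Density of Y = sigma * Z via the change-of-variables formula:
    f_Y(y) = f_Z(g^{-1}(y)) * d/dy[g^{-1}(y)] = f_Z(y / sigma) / sigma."""
    return f_Z(y / sigma) / sigma

def normal_pdf(x, sigma):
    """N(0, sigma^2) density written out directly."""
    return math.exp(-x ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

# The two expressions agree pointwise for any sigma > 0, e.g. sigma = 0.5:
for y in (-2.0, -0.3, 0.0, 0.7, 1.5):
    assert abs(f_Y(y, 0.5) - normal_pdf(y, 0.5)) < 1e-12
```

The assertions pass for every test point, confirming that dividing the standard normal density by $\sigma$ and evaluating it at $y/\sigma$ reproduces the $N(0,\sigma^2)$ density.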