Proof that the expected value of the squared sample mean equals the variance divided by $n$ plus $\mu^2$

Tags: expected-value, statistics

I understand that the expected value of the square of any random variable is equal to the squared population mean plus the population variance, as explained for instance here:

Where did this statistics formula come from: $E[X^2] = \mu^2 + \sigma^2$

This gives me an intuitive sense that the expected value of the squared sample mean is equal to the variance divided by $n$ plus $\mu^2$. However, is there a mathematical proof? A non-calculus proof would be preferred.
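As a quick numerical sanity check of the premise $E[X^2] = \mu^2 + \sigma^2$ linked above, the following is a minimal Monte Carlo sketch. It assumes NumPy and an arbitrarily chosen normal distribution; any distribution with finite variance would do, and the specific parameters are illustrative only.

```python
import numpy as np

# Monte Carlo check of E[X^2] = mu^2 + sigma^2 for an arbitrary distribution
# (here normal with mu = 3, sigma = 2).
rng = np.random.default_rng(0)
mu, sigma = 3.0, 2.0
x = rng.normal(mu, sigma, size=1_000_000)

print(np.mean(x**2))      # empirical E[X^2], close to 13.0
print(mu**2 + sigma**2)   # theoretical value: 13.0
```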

Best Answer

This is not necessarily true if the observations $X_1, X_2, \dots, X_n$ are correlated. Here's a proof for the case when they're pairwise uncorrelated:
\begin{align}
E\left(\left(\frac{1}{n}\sum_{i=1}^n X_i\right)^2\right) &= E\left(\left(\mu+\frac{1}{n}\sum_{i=1}^n(X_i-\mu)\right)^2\right)\\
&= E\left(\mu^2+\frac{2\mu}{n}\sum_{i=1}^n(X_i-\mu)+\frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n(X_i-\mu)(X_j-\mu)\right)\\
&= \mu^2+0+\frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n E\big((X_i-\mu)(X_j-\mu)\big)\\
&= \mu^2+\frac{1}{n^2}\sum_{i=1}^n\sigma^2\\
&= \mu^2+\frac{\sigma^2}{n}\ .
\end{align}
The third line uses linearity of expectation together with $E(X_i-\mu)=0$, which makes the middle term vanish. The pairwise uncorrelatedness of $X_i$ and $X_j$ is used where $E\big((X_i-\mu)(X_j-\mu)\big)$ is set to zero for $i\ne j$, leaving only the $n$ diagonal terms $E\big((X_i-\mu)^2\big)=\sigma^2$. Of course, the same identity holds a fortiori if $X_1, X_2, \dots, X_n$ are independent.
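A small simulation in the same spirit (again assuming NumPy, with an arbitrary choice of distribution and of $n$) illustrates the resulting identity $E\big(\bar X^2\big)=\mu^2+\sigma^2/n$ for i.i.d. samples:

```python
import numpy as np

# Monte Carlo check of E[(sample mean)^2] = mu^2 + sigma^2 / n for i.i.d. draws.
rng = np.random.default_rng(1)
mu, sigma, n = 3.0, 2.0, 5
trials = 1_000_000

# Each row is one sample of size n; take the mean of each row, then square it.
samples = rng.normal(mu, sigma, size=(trials, n))
sample_means = samples.mean(axis=1)

print(np.mean(sample_means**2))   # empirical E[(sample mean)^2], close to 9.8
print(mu**2 + sigma**2 / n)       # theoretical value: 9.8
```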
