Solved – Central limit theorem and the law of large numbers

central-limit-theorem, convergence, law-of-large-numbers, normal-distribution, probability

I have a very beginner's question regarding the Central Limit Theorem (CLT):

I am aware that the CLT states that the mean of $n$ i.i.d. random variables is approximately normally distributed for large $n$ (where $n$ is the number of summands), or equivalently that the standardized mean converges in distribution to a standard normal.

Now the Law of Large Numbers states, roughly speaking, that the mean of i.i.d. random variables converges (in probability or almost surely) to their expected value.

What I don't understand is: If, as the CLT states, the mean is approximately normally distributed, how then can it also converge to the expected value at the same time?

Convergence would imply to me that, as $n$ grows, the probability that the mean takes a value other than the expected value goes to zero; hence the distribution would not really be normal but almost zero everywhere except at the expected value.

Any explanation is welcome.

Best Answer

This figure shows the distributions of the means of $n=1$ (blue), $10$ (red), and $100$ (gold) independent and identically distributed (iid) normal distributions (of unit variance and mean $\mu$):

[Figure: three overlapping normal PDFs, narrowing around $\mu$ as $n$ increases]

As $n$ increases, the distribution of the mean becomes more "focused" on $\mu$. (The sense of "focusing" is easily quantified: given any fixed open interval $(a,b)$ containing $\mu$, the probability that the mean lies in $(a,b)$ increases with $n$ and has a limiting value of $1$.)
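This "focusing" can be computed exactly in the normal case: the mean of $n$ iid $N(\mu, 1)$ variables is itself $N(\mu, 1/n)$, so the probability of landing in $(a,b)$ follows from the normal CDF. A minimal sketch (the function name and interval are illustrative, not from the original):

```python
import math

def prob_mean_in_interval(a, b, mu=0.0, n=1):
    """P(mean of n iid N(mu, 1) variables lies in (a, b)).

    The sample mean is exactly N(mu, 1/n), so we evaluate the
    normal CDF via math.erf."""
    def phi(x):  # standard normal CDF
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    s = math.sqrt(n)
    return phi((b - mu) * s) - phi((a - mu) * s)

for n in (1, 10, 100):
    print(n, prob_mean_in_interval(-0.5, 0.5, mu=0.0, n=n))
```

For the fixed interval $(-0.5, 0.5)$ around $\mu = 0$, the probability climbs toward $1$ as $n$ grows, which is the quantified "focusing" above.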

However, when we standardize these distributions, we rescale each of them to have mean $0$ and unit variance: they then all coincide. This is how we see that although the PDFs of the means themselves spike upwards and focus around $\mu$, every one of these standardized distributions still has the standard Normal shape, even though the unstandardized ones differ individually.
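The standardization step can be checked by simulation. A minimal sketch (the sample sizes, $\mu = 5$, and trial count are illustrative, assuming the standardization $Z = \sqrt{n}(\bar X - \mu)/\sigma$ described above):

```python
import random
import statistics

random.seed(0)
mu, sigma, trials = 5.0, 1.0, 20000
results = {}

for n in (1, 10, 100):
    # For each trial, draw n iid N(mu, sigma) values and standardize
    # the sample mean: Z = sqrt(n) * (xbar - mu) / sigma.
    zs = [(statistics.fmean(random.gauss(mu, sigma) for _ in range(n)) - mu)
          * n**0.5 / sigma
          for _ in range(trials)]
    results[n] = (statistics.fmean(zs), statistics.stdev(zs))
    print(n, round(results[n][0], 3), round(results[n][1], 3))
```

For every $n$, the standardized means have sample mean near $0$ and sample standard deviation near $1$: after rescaling, the three curves in the figure really are the same distribution.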

The Central Limit Theorem says that when you start with any distribution that has a finite variance (not just a normal distribution) and play the same game with means of $n$ iid values as $n$ increases, you see the same thing: the distributions of the means focus around the original mean (the Weak Law of Large Numbers), while the standardized mean distributions converge to a standard Normal distribution (the Central Limit Theorem).
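Both statements can be seen side by side in one simulation starting from a decidedly non-normal distribution. A minimal sketch, using Exponential(1) (chosen for illustration because its mean and standard deviation are both $1$):

```python
import random
import statistics

random.seed(1)
trials = 20000
# Exponential(1) is skewed and far from normal; mean = sd = 1.
mu, sigma = 1.0, 1.0
raw_spread, std_spread = {}, {}

for n in (1, 10, 100):
    means = [statistics.fmean(random.expovariate(1.0) for _ in range(n))
             for _ in range(trials)]
    # Weak LLN: the raw means concentrate around mu (spread ~ 1/sqrt(n)) ...
    raw_spread[n] = statistics.stdev(means)
    # ... CLT: while the standardized means keep unit spread.
    std_spread[n] = statistics.stdev([(m - mu) * n**0.5 / sigma for m in means])
    print(n, round(raw_spread[n], 3), round(std_spread[n], 3))
```

The raw spread shrinks like $1/\sqrt{n}$ (the focusing of the LLN), while the standardized spread stays near $1$ for every $n$ (the stable Normal limit of the CLT). There is no contradiction: the two theorems describe the same sequence of distributions at two different scales.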
