Asymptotic Normality – Why Do Statisticians Prove It?

asymptotics, normality-assumption

In many statistics papers, authors propose a new data analysis methodology and prove its properties, such as consistency or asymptotic normality. I think it's a kind of tradition or custom.
I understand why consistency is important, but I don't understand why asymptotic normality matters so much.

It is a large-sample property (written 'large', read 'infinite').
In real data analysis, we never have an infinite sample.
Even if an estimator is asymptotically normal, its distribution at a realistic sample size may be far from a normal distribution.

Best Answer

It is useful, for example, in order to quantify the sampling uncertainty of an estimator, or to obtain the null distribution of a test statistic.

Recall that a normal random variable takes about 95% of its realizations in the interval $\mu\pm1.96\sigma$. So if you can demonstrate that (typically, a suitably scaled version of) an estimator is asymptotically normal, then you know it behaves approximately normally at least in large samples, and you can easily construct confidence intervals, for example.
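As a small illustration (not part of the original answer), here is a sketch in Python of how asymptotic normality of the sample mean justifies the usual plug-in 95% interval; the exponential data, scale, and sample size are arbitrary choices for the example:

```python
import numpy as np

# Hypothetical example: a 95% confidence interval for a population mean,
# justified by the asymptotic normality of the sample mean (CLT).
rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=500)   # skewed data, true mean = 2.0

mean_hat = x.mean()
se_hat = x.std(ddof=1) / np.sqrt(len(x))   # estimated standard error

# Asymptotic 95% CI: estimate +/- 1.96 * estimated standard error
ci_low, ci_high = mean_hat - 1.96 * se_hat, mean_hat + 1.96 * se_hat
print(f"mean = {mean_hat:.3f}, 95% CI = ({ci_low:.3f}, {ci_high:.3f})")
```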

Whether the approximation is useful in settings where (as always in practice) your sample is finite is, unfortunately, generally not known analytically: if we could derive the finite-sample distribution analytically, that is what we would work with. Unfortunately, that is possible only in very rare cases (for example, when sampling from a normal distribution, the t-statistic follows a t distribution exactly).

Typically, simulations are then used to at least get an idea of the usefulness of the approximation in relevant cases.
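One way to do this (again, only a sketch under assumed settings, not from the original answer) is to simulate the empirical coverage of the asymptotic 95% interval at a specific finite sample size and compare it with the nominal 0.95:

```python
import numpy as np

# Hypothetical simulation: check how well the asymptotic 95% interval for the
# mean of skewed (exponential) data covers the true mean at a finite n.
rng = np.random.default_rng(1)
true_mean, n, n_sims = 2.0, 30, 10_000

covered = 0
for _ in range(n_sims):
    x = rng.exponential(scale=true_mean, size=n)
    se = x.std(ddof=1) / np.sqrt(n)
    lo, hi = x.mean() - 1.96 * se, x.mean() + 1.96 * se
    covered += (lo <= true_mean <= hi)

# If the normal approximation were exact, coverage would be close to 0.95;
# for small n and skewed data it is typically somewhat below that.
print(f"empirical coverage at n={n}: {covered / n_sims:.3f}")
```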
