Solved – Why is asymptotic normality important for an estimator

Tags: confidence-interval, estimators, normality-assumption

Is it because it allows for easy construction of confidence intervals? Isn't it still possible to construct confidence intervals without this property, i.e. if it converged to another distribution? Please tell me some reasons why you would want an estimator to be asymptotically normal.

Best Answer

Why is asymptotic normality important for an estimator?

I wouldn't say it's important, really, but when it happens, it can be convenient, and the plain fact is, it happens a lot -- for many popular estimators in commonly used models, it is the case that the distribution of an appropriately standardized estimator will be asymptotically normal.

So whether I wish it or not, it happens. [Indeed, in these notes, Charles Geyer says "almost all estimators of practical interest are [...] asymptotically normal", and I think that's probably a fair assessment.]

Is it because it allows for easy construction of confidence intervals?

Well, it does allow easy construction of confidence intervals, if the sample sizes are large enough that you can reasonably approximate the sampling distribution as normal ... as long as you have a computer, or tables, or happen to remember the critical values you want. [Without any of those it would be mildly inconvenient, but I can manage okay even if I decide to compute an 85% interval or a 96.5% interval or whatever without a computer or tables, since I can take a nearby value I know, or a pair of nearby values either side of the one I want, and do a little playing with a calculator ... or at worst a pen and paper ... and get an interval that's accurate enough. After all, it's already an approximation in at least a couple of different ways, so how accurate does it really need to be?]
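For concreteness, here's a minimal sketch of how those critical values turn into a normal-approximation interval at whatever level you like (this is not part of the original answer; it assumes Python with scipy, and the estimate and standard error are made-up numbers):

```python
# Minimal sketch: a normal-approximation ("Wald-type") interval for an
# estimate with standard error `se`, at arbitrary levels like 85% or 96.5%.
# The numbers are hypothetical, purely for illustration.
from scipy.stats import norm

estimate = 5.2   # hypothetical point estimate
se = 0.4         # hypothetical standard error

for level in (0.85, 0.965, 0.95):
    z = norm.ppf(1 - (1 - level) / 2)   # two-sided normal critical value
    lo, hi = estimate - z * se, estimate + z * se
    print(f"{level:.1%} interval: ({lo:.3f}, {hi:.3f}), z = {z:.3f}")
```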

But I really wouldn't say that "I want asymptotic normality because of that".

I construct finite-sample CIs all the time without bothering with normality. I'm perfectly happy to use a binomial(40,0.5) interval or a $t_{80}$ interval or a $\chi^2_{100}$ interval or an $F_{60,120}$ interval instead of trying to invoke asymptotic normality in any of those cases, so asymptotic-something-else wouldn't have been a big deal. Indeed, I use permutation tests at least sometimes, and generate CIs from permutation or randomization distributions, and I don't give a damn about the asymptotic distribution when I do (since one conditions on the sample, asymptotics are irrelevant).
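As a rough illustration (a hedged sketch only, assuming Python with numpy/scipy and a simulated normal sample, not anything from the original answer), here's a $t_{80}$-style interval computed straight from the t distribution, with no appeal to asymptotic normality:

```python
# Minimal sketch: a finite-sample t interval for a mean (as in the t_80
# example above), using the exact t critical value rather than a normal
# approximation. The data are simulated purely for illustration.
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=2.0, size=81)   # hypothetical sample, n = 81 -> 80 df

n = len(x)
mean, se = x.mean(), x.std(ddof=1) / np.sqrt(n)
tcrit = t.ppf(0.975, df=n - 1)                 # exact t_80 critical value, 95% level
print(f"95% t interval: ({mean - tcrit * se:.3f}, {mean + tcrit * se:.3f})")
```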

Isn't it still possible to construct confidence intervals without this property, i.e. if it converged to another distribution?

Yes, absolutely. Imagine some suitably scaled estimator were, say, asymptotically chi-squared with 2 df (which is not normal). Would I be bothered? Would it even be mildly inconvenient? Not a bit. (If anything, in some ways that would be easier.)
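To see that the mechanics are no harder, here's a minimal sketch of inverting a chi-squared pivot to get an interval (assuming Python with scipy; it uses the exact pivot $2n\bar{X}/\theta \sim \chi^2_{2n}$ for an exponential mean as a convenient stand-in, not anything from the original answer):

```python
# Minimal sketch: building an interval from a chi-squared pivot is no harder
# than from a normal one. Here the exact pivot 2*n*xbar/theta ~ chi^2_{2n}
# for the mean of an exponential sample illustrates the mechanics; the data
# are simulated purely for illustration.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
x = rng.exponential(scale=3.0, size=25)      # hypothetical exponential sample
n, xbar = len(x), x.mean()

q_lo, q_hi = chi2.ppf([0.025, 0.975], df=2 * n)
ci = (2 * n * xbar / q_hi, 2 * n * xbar / q_lo)
print(f"95% interval for the exponential mean: ({ci[0]:.3f}, {ci[1]:.3f})")
```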

But even if the asymptotic distribution weren't especially convenient, that wouldn't necessarily bother me. For example, I can happily use a Kolmogorov-Smirnov test without difficulty, and the statistic is an estimator of something. It's not convenient in the sense that I could only write down the asymptotic distribution as an infinite sum (but it is convenient in that I just go ahead and use either tables or a computer program to do things with it ... just as I do with the normal).
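That infinite sum is $K(x) = 1 - 2\sum_{k\ge 1}(-1)^{k-1}e^{-2k^2x^2}$, and evaluating it is painless with a computer. Here's a minimal sketch (assuming Python with scipy; not part of the original answer) comparing a truncated sum to scipy's implementation of the same limiting distribution:

```python
# Minimal sketch: the asymptotic (Kolmogorov) distribution of sqrt(n)*D_n is
# "only an infinite sum", but the sum is trivial to evaluate numerically:
#   K(x) = 1 - 2 * sum_{k>=1} (-1)^(k-1) * exp(-2 k^2 x^2)
# scipy.stats.kstwobign implements this same limiting distribution.
import numpy as np
from scipy.stats import kstwobign

def kolmogorov_cdf(x, terms=100):
    k = np.arange(1, terms + 1)
    return 1 - 2 * np.sum((-1) ** (k - 1) * np.exp(-2 * k**2 * x**2))

for x in (0.5, 1.0, 1.5):
    print(x, kolmogorov_cdf(x), kstwobign.cdf(x))   # the two columns agree
```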

On the other hand, we needn't (and shouldn't) ignore the fact that the most common kinds of estimator will often be asymptotically normal -- MLEs are usually asymptotically normal, as are method-of-moments estimators and estimators based on (non-extreme) quantiles (and more besides). I'm not going to ignore it when it happens.
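And when it does happen, using it is easy. A minimal sketch (assuming Python with numpy/scipy and a simulated Poisson sample, not anything from the original answer) of the usual Wald-type interval based on the asymptotic normality of an MLE:

```python
# Minimal sketch: using the asymptotic normality of an MLE when it's handy.
# For a Poisson rate, the MLE is the sample mean with asymptotic variance
# lambda/n, giving the usual Wald-type interval. Data simulated for illustration.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
x = rng.poisson(lam=4.0, size=200)          # hypothetical Poisson sample

lam_hat = x.mean()                           # MLE of the rate
se = np.sqrt(lam_hat / len(x))               # estimated asymptotic standard error
z = norm.ppf(0.975)
print(f"95% Wald interval: ({lam_hat - z * se:.3f}, {lam_hat + z * se:.3f})")
```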

Please tell me some reasons why you would want an estimator to be asymptotically normal.

I don't, especially. But if it happens, I'm happy to use that fact whenever it's convenient and reasonable to do that instead of something else.
