Unbiased Estimators – Difference Between Asymptotic Unbiasedness and Consistency in Mathematical Statistics

bias, consistency, estimators, mathematical-statistics, unbiased-estimator

Does each imply the other? If not, does one imply the other? Why/why not?

This issue came up in response to a comment on an answer I posted here.

Although Google searches for the relevant terms didn't turn up anything particularly useful, I did notice an answer on math.stackexchange. However, I thought this question was appropriate for this site too.

EDIT after reading the comments

Relative to the math.stackexchange answer, I was after something more in depth, covering some of the issues dealt with in the comment thread @whuber linked. Also, as I see it, the math.stackexchange question shows that consistency doesn't imply asymptotic unbiasedness, but it doesn't explain much, if anything, about why. The OP there also takes for granted that asymptotic unbiasedness doesn't imply consistency, so the sole answer there so far doesn't address why this is.

Best Answer

In the related post over at math.se, the answerer takes as given that the definition for asymptotic unbiasedness is $\lim_{n\to \infty} E(\hat \theta_n-\theta) = 0$.

Intuitively, I disagree: "unbiasedness" is a term we first learn in relation to a finite-sample distribution. It appears more natural, then, to consider "asymptotic unbiasedness" in relation to an asymptotic distribution. And in fact, this is what Lehmann & Casella do in "Theory of Point Estimation" (1998, 2nd ed.), p. 438, Definition 2.1 (simplified notation):

$$\text{If} \;\;\;k_n(\hat \theta_n - \theta )\to_d H$$

for some sequence $k_n$ and for some random variable $H$, the estimator $\hat \theta_n$ is asymptotically unbiased if the expected value of $H$ is zero.
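
As a quick illustration of this definition (my own sketch, not from the book), take the sample mean of i.i.d. draws: with $k_n = \sqrt n$, the CLT gives $\sqrt n(\bar X_n - \theta) \to_d N(0, \sigma^2)$, a limit whose expected value is zero. A short Monte Carlo check:

```python
# Monte Carlo sketch (illustrative only): for the sample mean, k_n = sqrt(n)
# and sqrt(n) * (xbar_n - theta) ->_d N(0, sigma^2), which has mean zero.
import numpy as np

rng = np.random.default_rng(0)
theta, sigma = 1.0, 2.0                      # true mean and std. deviation
for n in [10, 100, 1_000]:
    # 5,000 replications of the scaled estimation error
    samples = rng.normal(theta, sigma, size=(5_000, n))
    scaled_error = np.sqrt(n) * (samples.mean(axis=1) - theta)
    # mean stays near 0 (i.e. E(H) = 0) while the spread stays near sigma
    print(f"n={n}: mean={scaled_error.mean():.3f}, sd={scaled_error.std():.3f}")
```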

Given this definition, we can argue that consistency implies asymptotic unbiasedness since

$$\hat \theta_n \to_{p}\theta \implies \hat \theta_n - \theta \to_{p}0 \implies \hat \theta_n - \theta \to_{d}0$$

...and the degenerate distribution that is equal to zero has expected value equal to zero (here the $k_n$ sequence is a sequence of ones).

But I suspect that this is not really useful; it is just a by-product of a definition of asymptotic unbiasedness that allows for degenerate random variables. Essentially, we would like to know whether, if we had an expression involving the estimator that converges to a non-degenerate rv, consistency would still imply asymptotic unbiasedness.

Earlier in the book (p. 431, Definition 1.2), the authors call the property $\lim_{n\to \infty} E(\hat \theta_n-\theta) = 0$ "unbiasedness in the limit", and it does not coincide with asymptotic unbiasedness.
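
To see the two notions come apart, here is a standard counterexample (my own illustration, not from the book): let $\hat \theta_n = \theta$ with probability $1 - 1/n$ and $\hat \theta_n = n$ with probability $1/n$. Then $\hat \theta_n \to_p \theta$, so it is consistent (and hence asymptotically unbiased in the degenerate sense above), yet $E(\hat \theta_n) = \theta(1 - 1/n) + 1 \to \theta + 1$, so it is not unbiased in the limit. A simulation sketch:

```python
# Simulation of the counterexample above: consistent, but the bias tends to 1.
import numpy as np

rng = np.random.default_rng(1)
theta = 1.0
for n in [10, 100, 10_000]:
    blow_up = rng.random(200_000) < 1.0 / n          # rare event, prob 1/n
    theta_hat = np.where(blow_up, float(n), theta)   # equals n when it occurs
    miss = np.mean(np.abs(theta_hat - theta) > 0.1)  # P(|error| > 0.1) -> 0
    print(f"n={n}: E(theta_hat) ~ {theta_hat.mean():.3f}, "
          f"P(|error|>0.1) ~ {miss:.4f}")
```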

Consistency occurs whenever:

  • the estimator is unbiased in the limit, and
  • the sequence of estimator variances goes to zero (implying that the variance exists in the first place).

Together, these make up a sufficient, but not necessary, condition.
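
To see why the two conditions are sufficient (a standard argument, added here for completeness), bound the miss probability by the mean squared error via Chebyshev's inequality:

$$P\big(|\hat \theta_n - \theta| \ge \varepsilon\big) \le \frac{E(\hat \theta_n - \theta)^2}{\varepsilon^2} = \frac{\operatorname{Var}(\hat \theta_n) + \big[E(\hat \theta_n) - \theta\big]^2}{\varepsilon^2}.$$

Under the two bullets, both terms in the numerator go to zero, so $\hat \theta_n \to_p \theta$.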

For the intricacies related to consistency with non-zero variance (a bit mind-boggling), visit this post.
