Consistency of estimators by Chebyshev’s inequality

parameter estimation, probability, statistics

This is a follow-up to a question I asked before; I will re-introduce the background.

If $\hat{\theta}_n$ is an estimator for the parameter $\theta$, then two sufficient conditions for the consistency of $\hat{\theta}_n$ are

$$\text{Bias}(\hat{\theta}_n)\to 0 \quad\text{and}\quad \text{Var}(\hat{\theta}_n)\to 0;$$

if both hold, then $\lim_{n\to\infty}\Pr(|\hat{\theta}_n-\theta|>\varepsilon)=0$ for all $\varepsilon>0$.
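(For completeness, the standard argument applies Chebyshev's inequality to the squared error together with the bias-variance decomposition of the mean squared error:

$$\Pr(|\hat{\theta}_n-\theta|>\varepsilon)\le \frac{E\left[(\hat{\theta}_n-\theta)^2\right]}{\varepsilon^2}=\frac{\text{Var}(\hat{\theta}_n)+\text{Bias}(\hat{\theta}_n)^2}{\varepsilon^2}\to 0.)$$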

Now suppose $X_1,\ldots,X_n$ are iid samples drawn from the Binomial$(2,p)$ distribution with unknown parameter $p\in[0,1]$. We have $E(X_1)=2p$ and $\text{Var}(X_1)=2p(1-p)$. Let $\hat{p}=\frac{1}{2n}\sum_{i=1}^n X^2_i$ be the estimator for $p$. This is a biased estimator, so we cannot use the fact introduced above to conclude consistency. I find that Chebyshev's inequality makes things worse, since $E(\hat{p})\neq p$.

How should I proceed to prove/disprove consistency? Thanks!

For the case where $\hat{p}$ does not depend on $n$, say $\hat{p}=X_1+X_2$ with the same distribution as above, is it possible to discuss its consistency?

Best Answer

By the WLLN, $\hat p=\frac{1}{2n}\sum_{i=1}^n X^2_i$ converges in probability to $$\frac{1}{2}E[X_i^2]=\frac{1}{2}\left(\text{Var}(X_i)+\left(E[X_i]\right)^2\right)=\frac{1}{2}\left(2p(1-p)+4p^2\right)=p(1+p).$$ Since $p(1+p)\neq p$ whenever $p>0$, the estimator is not consistent for $p$ (except in the degenerate case $p=0$).
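To see this numerically, here is a minimal simulation sketch (the true value $p=0.3$, the sample sizes, and the seed are arbitrary illustrative choices, not part of the question):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
p = 0.3  # arbitrary true parameter, chosen for illustration

for n in (100, 10_000, 1_000_000):
    x = rng.binomial(2, p, size=n)      # iid Binomial(2, p) sample
    p_hat = (x ** 2).sum() / (2 * n)    # the estimator from the question
    print(n, p_hat)
```

As $n$ grows, the printed values should settle near $p(1+p)=0.39$ rather than near $p=0.3$, in line with the WLLN computation above.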

You could also obtain the same limit $\frac{1}{2}E[X_i^2]$ via Chebyshev's inequality. In fact, for an iid sample with finite variance, the WLLN is proved quite simply by Chebyshev's inequality.
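Sketched for this particular estimator: since $E(\hat p)=\frac{1}{2}E[X_1^2]=p(1+p)$ and $\text{Var}(\hat p)=\frac{\text{Var}(X_1^2)}{4n}$, where $\text{Var}(X_1^2)$ is finite because $X_1$ only takes values in $\{0,1,2\}$, Chebyshev's inequality gives

$$\Pr\left(\left|\hat p-p(1+p)\right|>\varepsilon\right)\le\frac{\text{Var}(\hat p)}{\varepsilon^2}=\frac{\text{Var}(X_1^2)}{4n\varepsilon^2}\to 0\quad\text{as }n\to\infty.$$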

As for the alternative estimator $\hat p=X_1+X_2$: there is no dependence on $n$ here. Since the sequence is constant in $n$, it trivially converges in probability to the random variable $X_1+X_2$ itself as $n\to\infty$. Because $X_1+X_2$ is not equal to the constant $p$ with probability one (except in the degenerate case $p=0$), the estimator cannot converge in probability to $p$, so it is not consistent.
