Showing that an estimator is consistent

asymptotics, parameter estimation, sampling, sampling-theory

Let $X_1,X_2,\ldots,X_n$ be a random sample from $\mathcal{N}(\theta,1)$. Consider the following (randomized) estimator of $\theta$ given a sample of size $n$:
$$
\hat{\theta}_n = \bar{X} + \begin{cases}
0 & \text{with probability } 1 - 1/n,\\
n & \text{with probability } 1/n.
\end{cases}
$$

  1. Is $\hat{\theta}_n$ consistent? Prove or disprove.
  2. Is $\hat{\theta}_n$ asymptotically unbiased? Prove or disprove.

Any hints?

Best Answer

I can't comment (yet), so I'll add this as an answer.

I will assume that $\bar{X}_n =\frac{1}{n}\sum_{i=1}^n X_i$.

1) In this setting, consistency means that $\hat{\theta}_n\to \theta$ in probability. For a first hint, look at the weak law of large numbers (https://en.wikipedia.org/wiki/Law_of_large_numbers), and note that if $(Z_n)$ and $(Y_n)$ are sequences of random variables converging in probability to $Z$ and $Y$ respectively, then $(Z_n + Y_n)$ converges in probability to $Z+Y$ (this is easy to prove). In your setting it should be easy to show, directly from the definition, that the random variable

$$W_n = \begin{cases} 0 & \text{with probability } 1 - 1/n\\ n & \text{with probability } 1/n \end{cases}$$ converges to $0$ in probability. Together, these facts should allow you to answer the question.
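
For what it's worth, here is a sketch of that definition check (with $\varepsilon > 0$ fixed and $n$ large enough that $n > \varepsilon$, so that the event $\{|W_n| > \varepsilon\}$ is exactly $\{W_n = n\}$):

$$
\mathbb{P}(|W_n| > \varepsilon) = \mathbb{P}(W_n = n) = \frac{1}{n} \to 0 \quad \text{as } n \to \infty,
$$

which is precisely convergence of $W_n$ to $0$ in probability.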

2) Asymptotic unbiasedness requires that $\mathbb{E}(\hat{\theta}_n) - \theta \to 0$ as $n\to\infty$. Here, compute $\mathbb{E}(\hat{\theta}_n) - \theta$ and see what you can conclude about its limit as $n\to\infty$.
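
In case it helps, here is a sketch of that computation, writing $W_n$ for the jump term above so that $\hat{\theta}_n = \bar{X}_n + W_n$ (only linearity of expectation is used; no independence assumption is needed):

$$
\mathbb{E}(\hat{\theta}_n) - \theta
= \underbrace{\mathbb{E}(\bar{X}_n)}_{=\,\theta}
+ \underbrace{\mathbb{E}(W_n)}_{=\,0\cdot(1-1/n)\,+\,n\cdot(1/n)\,=\,1}
- \theta
= 1 \quad \text{for every } n.
$$

If I am reading the exercise correctly, this is its whole point: an estimator can be consistent (part 1) without being asymptotically unbiased.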
