Solved – Counterexample for the sufficient (but not necessary) condition for consistency

consistency, mathematical-statistics, probability, unbiased-estimator, variance

We know that if an estimator is an unbiased estimator of $\theta$ and if its variance tends to $0$ as $n$ tends to infinity, then it is a consistent estimator of $\theta$. But this is a sufficient, not a necessary, condition. I am looking for an example of an estimator that is consistent but whose variance does not tend to $0$ as $n$ tends to infinity. Any suggestions?

Best Answer

Glad to see that my (incorrect) answer generated two more, and turned a dead question into a lively Q&A thread. So it's time to try to offer something worthwhile, I guess.

Consider a serially correlated, covariance-stationary stochastic process $\{y_t\},\;\; t=1,...,n$, with mean $\mu$ and autocovariances $\{\gamma_j\},\;\; \gamma_j\equiv \operatorname{Cov}(y_t,y_{t-j})$. Assume that $\lim_{j\rightarrow \infty}\gamma_j= 0$ (this bounds the "strength" of the autocorrelation as two realizations of the process lie further and further apart in time). Then we have that

$$\bar y_n = \frac 1n\sum_{t=1}^ny_t\rightarrow_{m.s} \mu,\;\; \text{as}\; n\rightarrow \infty$$

i.e. the sample mean converges in mean square to the true mean of the process, and therefore it also converges in probability: so it is a consistent estimator of $\mu$.

The variance of $\bar y_n$ can be found to be

$$\operatorname{Var}(\bar y_n) = \frac 1n \gamma_0+\frac 2n \sum_{j=1}^{n-1}\left(1-\frac {j}{n}\right)\gamma_j$$

which goes to zero as $n$ goes to infinity: the first term vanishes, and since $\gamma_j\rightarrow 0$, the Cesàro-type average in the second term vanishes as well.
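
As a quick numerical sanity check, here is a minimal sketch (my own illustration, not part of the original answer) that evaluates this formula for an AR(1) process, a standard covariance-stationary example with $\gamma_j = \phi^j \gamma_0 \rightarrow 0$; the AR(1) choice and the parameter values are assumptions of mine.

```python
# Minimal sketch (assumption: an AR(1) process y_t = phi*y_{t-1} + e_t with
# e_t ~ N(0, sigma2); then gamma_j = phi^j * gamma_0, which indeed tends to 0).
import numpy as np

phi, sigma2 = 0.7, 1.0
gamma0 = sigma2 / (1 - phi**2)        # Var(y_t) for a stationary AR(1)

def var_ybar(n):
    """Var(ybar_n) via the formula above: gamma0/n + (2/n) sum (1 - j/n) gamma_j."""
    j = np.arange(1, n)
    return gamma0 / n + (2 / n) * np.sum((1 - j / n) * gamma0 * phi**j)

for n in (10, 100, 1000, 10_000):
    print(n, var_ybar(n))             # shrinks roughly like 1/n, as claimed
```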

Now, making use of Cardinal's comment, let's further randomize our estimator of the mean by considering the estimator

$$\tilde \mu_n = \bar y_n + z_n$$

where $\{z_t\}$ is a stochastic process of independent random variables, which are also independent of the $y_t$'s, with $z_t$ taking the value $at$ (the parameter $a>0$ is to be specified by us) with probability $1/t^2$, the value $-at$ with probability $1/t^2$, and the value zero otherwise. So $z_t$ has expected value and variance

$$E(z_t) = at\frac 1{t^2} -at\frac 1{t^2} + 0\cdot \left (1-\frac 2{t^2}\right)= 0,\;\;\operatorname{Var}(z_t) = 2a^2$$
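
These moments are easy to verify by simulation. Below is a minimal sketch (mine; the value $a=1$ is an arbitrary choice) drawing many copies of $z_t$ for a few values of $t$ and checking that the sample mean is near $0$ and the sample variance near $2a^2$, at every $t$.

```python
# Minimal sketch: empirical check that E(z_t) = 0 and Var(z_t) = 2a^2
# (a = 1 here is an arbitrary choice of mine).
import numpy as np

rng = np.random.default_rng(0)
a = 1.0

def sample_z(t, size):
    """z_t = +a*t w.p. 1/t^2, -a*t w.p. 1/t^2, and 0 otherwise."""
    u = rng.random(size)
    z = np.zeros(size)
    z[u < 1 / t**2] = a * t
    z[(u >= 1 / t**2) & (u < 2 / t**2)] = -a * t
    return z

for t in (5, 20, 100):
    z = sample_z(t, 2_000_000)
    print(t, z.mean(), z.var())   # mean ~ 0, variance ~ 2a^2 = 2 for every t
```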

The expected value and the variance of the estimator are therefore

$$E(\tilde \mu_n) = \mu,\;\;\operatorname{Var}(\tilde \mu_n) = \operatorname{Var}(\bar y_n) + 2a^2$$

Consider the probability $P\left(|z_n| < \epsilon\right),\;\epsilon>0$: $|z_n|$ takes the value $0$ with probability $1-2/n^2$ and the value $an$ with probability $2/n^2$. So

$$P\left(|z_n| <\epsilon\right) \ge 1-\frac 2{n^2} \;\;\Rightarrow\;\; \lim_{n\rightarrow \infty}P\left(|z_n| < \epsilon\right) = 1$$

which means that $z_n$ converges in probability to $0$ (while its variance remains finite). Therefore

$$\operatorname{plim}\tilde \mu_n = \operatorname{plim}\bar y_n+\operatorname{plim} z_n = \mu$$

so this randomized estimator of the mean of the $y$ stochastic process remains consistent. But its variance does not go to zero as $n$ goes to infinity (nor does it go to infinity): it tends to $2a^2 > 0$.
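
To see the whole construction at work, here is a small Monte Carlo sketch (the AR(1) dynamics for $\{y_t\}$ and the values of $\phi$, $a$ and $\epsilon$ are again my own assumptions): the probability $P\left(|\tilde \mu_n - \mu| > \epsilon\right)$ shrinks toward zero as $n$ grows, even though $\operatorname{Var}(\tilde \mu_n) \rightarrow 2a^2$.

```python
# Minimal sketch: the randomized estimator concentrates at mu even though its
# variance tends to 2a^2, not 0. (AR(1) dynamics and phi, a, eps are my choices.)
import numpy as np

rng = np.random.default_rng(1)
phi, mu, a, eps, reps = 0.7, 0.0, 1.0, 0.5, 50_000

def p_far(n):
    """Monte Carlo estimate of P(|mu_tilde_n - mu| > eps)."""
    # Simulate `reps` stationary AR(1) paths of length n, keeping a running sum.
    y = rng.standard_normal(reps) / np.sqrt(1 - phi**2)  # stationary start
    total = y.copy()
    for _ in range(n - 1):
        y = phi * y + rng.standard_normal(reps)
        total += y
    ybar = mu + total / n          # sample mean of a process with mean mu
    # Add z_n: +a*n or -a*n, each with probability 1/n^2, and 0 otherwise.
    u = rng.random(reps)
    z = np.where(u < 1 / n**2, a * n, np.where(u < 2 / n**2, -a * n, 0.0))
    return np.mean(np.abs(ybar + z - mu) > eps)

for n in (50, 200, 800):
    print(n, p_far(n))   # -> 0: consistent, despite Var(mu_tilde_n) -> 2a^2 > 0
```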

In closing: why all the apparently useless elaboration with an autocorrelated stochastic process? Because Cardinal qualified his example as "absurd", something "just to show that mathematically, we can have a consistent estimator with non-zero and finite variance".
I wanted to hint that such a construction is not necessarily a mere curiosity, at least in spirit. There are times in real life when new, man-made processes begin, processes that have to do with how we organize our lives and activities. While we usually have designed them and can say a lot about them, they may still be so complex that they are reasonably treated as stochastic (the illusion of complete control over such processes, or of complete a priori knowledge of their evolution, whether they represent new ways to trade or produce, or new arrangements of the rights-and-obligations structure between humans, is just that: an illusion).

Being also new, we do not have enough accumulated realizations of them to do reliable statistical inference on how they will evolve. Ad hoc and perhaps "suboptimal" corrections are then an actual phenomenon: for example, we may have a process whose present we strongly believe depends on its past (hence the autocorrelated stochastic process), but we do not yet know how (hence the ad hoc randomization, while we wait for data to accumulate in order to estimate the covariances). Perhaps a statistician would find a better way to deal with this kind of severe uncertainty, but many entities have to function in an uncertain environment without the benefit of such scientific services.

What follows is the initial (wrong) answer (see especially Cardinal's comment).

Estimators that converge in probability to a random variable do exist: the case of "spurious regression" comes to mind, where, if we attempt to regress two independent random walks (i.e. non-stationary stochastic processes) on each other using ordinary least squares, the OLS estimator converges to a random variable.
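
For what it's worth, this phenomenon is easy to see numerically. The sketch below (my own, purely illustrative) regresses one independent random walk on another by OLS and shows that the dispersion of the slope across replications does not shrink as $n$ grows, i.e. the estimator does not settle down to any constant.

```python
# Minimal sketch of spurious regression: the OLS slope from regressing one
# independent random walk on another does not converge to a constant.
import numpy as np

rng = np.random.default_rng(2)

def ols_slope(n):
    x = np.cumsum(rng.standard_normal(n))   # random walk regressor
    y = np.cumsum(rng.standard_normal(n))   # independent random walk response
    xc = x - x.mean()
    return xc @ (y - y.mean()) / (xc @ xc)  # OLS slope with intercept

for n in (100, 1000, 10_000):
    slopes = np.array([ols_slope(n) for _ in range(2000)])
    print(n, slopes.std())   # spread stays of the same order: no convergence
```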

But a consistent estimator with non-zero variance does not exist, because consistency is defined as the convergence in probability of an estimator to a constant, which, by definition, has zero variance.