Solved – When can't the Cramer-Rao lower bound be reached

efficiency, estimation, estimators, point-estimation, unbiased-estimator

The Cramer-Rao lower bound (CRLB) gives the minimum variance of an unbiased estimator. One sentence on the Wikipedia page says: "However, in some cases, no unbiased technique exists which achieves the bound. This may occur either if for any unbiased estimator, there exists another with a strictly smaller variance, or if an MVU estimator exists, but its variance is strictly greater than the inverse of the Fisher information."

Could anyone give me examples of the two situations in which the CRLB can't be reached? That is,

(1) for every unbiased estimator, there exists another unbiased estimator with strictly smaller variance;

(2) an MVU estimator exists, but its variance is strictly greater than the CRLB.

Best Answer

There are several instances of (2), i.e., cases where the variance of the UMVU estimator strictly exceeds the Cramer-Rao lower bound. Here are some common examples:

  • Estimation of $e^{-\theta}$ when $X_1,\ldots,X_n$ are i.i.d $\mathsf{Poisson}(\theta)$:

Consider the case $n=1$ separately. Here we are to estimate the parametric function $e^{-\theta}=\delta$ (say) based on $X\sim\mathsf{Poisson}(\theta) $.

Suppose $T(X)$ is unbiased for $\delta$.

Therefore, $$E_{\theta}[T(X)]=\delta\quad,\forall\,\theta$$

Or, $$\sum_{j=0}^\infty T(j)\frac{\delta(\ln (\frac{1}{\delta}))^j}{j!}=\delta\quad,\forall\,\theta$$

That is, $$T(0)\delta+T(1)\delta\cdot\ln\left(\frac{1}{\delta}\right)+\cdots=\delta\quad,\forall\,\theta$$

Dividing by $\delta$ and comparing coefficients of the powers of $\ln\left(\frac{1}{\delta}\right)=\theta$ (a power series identity that must hold for all $\theta>0$) forces $T(0)=1$ and $T(j)=0$ for every $j\ge 1$. So we have the unique unbiased estimator (hence also the UMVUE) of $\delta(\theta)$:

$$T(X)=\begin{cases}1&,\text{ if }X=0 \\ 0&,\text{ otherwise }\end{cases}$$

Since $T(X)$ is Bernoulli with success probability $P_{\theta}(X=0)$,

\begin{align} \operatorname{Var}_{\theta}(T(X))&=P_{\theta}(X=0)(1-P_{\theta}(X=0)) \\&=e^{-\theta}(1-e^{-\theta}) \end{align}

The Cramer-Rao bound for $\delta$ is $$\text{CRLB}(\delta)=\frac{\left(\frac{d}{d\theta}\delta(\theta)\right)^2}{I(\theta)}\,,$$

where $I(\theta)=E_{\theta}\left[\left(\frac{\partial}{\partial\theta}\ln f_{\theta}(X)\right)^2\right]=\frac1{\theta}$ is the Fisher information, $f_{\theta}$ being the pmf of $X$.
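In detail, $\frac{\partial}{\partial\theta}\ln f_{\theta}(x)=\frac{x}{\theta}-1$, so

$$I(\theta)=E_{\theta}\left[\left(\frac{X}{\theta}-1\right)^2\right]=\frac{\operatorname{Var}_{\theta}(X)}{\theta^2}=\frac{1}{\theta}.$$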

Plugging in $\frac{d}{d\theta}e^{-\theta}=-e^{-\theta}$, this reduces to $$\text{CRLB}(\delta)=\theta e^{-2\theta}$$

Now take the ratio of the variance of $T$ to the Cramer-Rao bound:

\begin{align} \frac{\operatorname{Var}_{\theta}(T(X))}{\text{CRLB}(\delta)}&=\frac{e^{-\theta}(1-e^{-\theta})}{\theta e^{-2\theta}} \\&=\frac{e^{\theta}-1}{\theta} \\&=\frac{1}{\theta}\left[\left(1+\theta+\frac{\theta^2}{2}+\cdots\right)-1\right] \\&=1+\frac{\theta}{2}+\cdots \\&>1 \end{align}

The same calculation shows that the conclusion also holds for a sample of $n>1$ observations. In this case the UMVUE of $\delta$ is $\left(1-\frac1n\right)^{\sum_{i=1}^n X_i}$, with variance $e^{-2\theta}(e^{\theta/n}-1)$.
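If you want a quick numerical sanity check, here is a minimal NumPy sketch (the particular values of $\theta$ and $n$ are just illustrative, and the $n$-observation bound $\theta e^{-2\theta}/n$ uses $I_n(\theta)=n/\theta$):

```python
import numpy as np

# Illustrative values (not from the answer); any theta > 0 and n >= 1 shows the same gap.
theta, n, reps = 1.5, 5, 200_000
rng = np.random.default_rng(0)

# UMVUE of delta = exp(-theta) based on n i.i.d. Poisson(theta) observations
x = rng.poisson(theta, size=(reps, n))
t = (1 - 1 / n) ** x.sum(axis=1)

var_mc = t.var()                                          # Monte Carlo variance of the UMVUE
var_exact = np.exp(-2 * theta) * (np.exp(theta / n) - 1)  # closed form from the answer
crlb = theta * np.exp(-2 * theta) / n                     # CRLB for e^{-theta}, using I_n(theta) = n/theta

print(f"MC variance    : {var_mc:.5f}")
print(f"exact variance : {var_exact:.5f}")
print(f"CRLB           : {crlb:.5f}")  # strictly below the variance for every theta > 0
```

The exact variance exceeds the bound by the factor $n(e^{\theta/n}-1)/\theta>1$, mirroring the $n=1$ computation above.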

  • Estimation of $\theta$ when $X_1,\ldots,X_n$ ( $n>1$) are i.i.d $\mathsf{Exp}$ with mean $1/\theta$:

Here the UMVUE of $\theta$ is $\hat\theta=\frac{n-1}{\sum_{i=1}^n X_i}$, as shown here.

Using the Gamma distribution of $\sum\limits_{i=1}^n X_i$, a straightforward calculation shows $$\operatorname{Var}_{\theta}(\hat\theta)=\frac{\theta^2}{n-2}>\frac{\theta^2}{n}=\text{CRLB}(\theta)\quad,\,n>2$$
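To spell out the calculation: $S=\sum_{i=1}^n X_i\sim\mathsf{Gamma}(n,\theta)$ (shape $n$, rate $\theta$), so for $n>2$

$$E_{\theta}\left[\frac1S\right]=\frac{\theta}{n-1}\quad\text{and}\quad E_{\theta}\left[\frac1{S^2}\right]=\frac{\theta^2}{(n-1)(n-2)},$$

giving $E_{\theta}(\hat\theta)=\theta$ and $\operatorname{Var}_{\theta}(\hat\theta)=(n-1)^2\left[\frac{\theta^2}{(n-1)(n-2)}-\frac{\theta^2}{(n-1)^2}\right]=\frac{\theta^2}{n-2}$, while the Fisher information $1/\theta^2$ per observation gives $\text{CRLB}(\theta)=\theta^2/n$.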

Since several distributions can be transformed into this exponential distribution, this case in fact generates many more examples.

  • Estimation of $\theta^2$ when $X_1,\ldots,X_n$ are i.i.d $N(\theta,1)$:

The UMVUE of $\theta^2$ is $\overline X^2-\frac1n$, where $\overline X$ is the sample mean. Among other drawbacks, this estimator can be shown not to attain the lower bound. See page 4 of this note for details.
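In brief, $\overline X\sim N(\theta,1/n)$, and the standard formula $\operatorname{Var}(Y^2)=4\mu^2\sigma^2+2\sigma^4$ for $Y\sim N(\mu,\sigma^2)$ gives

$$\operatorname{Var}_{\theta}\left(\overline X^2-\frac1n\right)=\frac{4\theta^2}{n}+\frac{2}{n^2}>\frac{4\theta^2}{n}=\frac{(2\theta)^2}{I_n(\theta)}=\text{CRLB}(\theta^2),$$

since $I_n(\theta)=n$ here; the excess $2/n^2$ is strictly positive for every $\theta$ and $n$.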
