Solved – Need help finding UMVUE for a Poisson Distribution

mathematical-statistics, self-study, umvue

My fellow classmates and I are stuck on a three-part homework problem about finding the UMVUE for a Poisson distribution.

The problem goes like this:

Let $X \sim \text{Pois}(\lambda)$, and we want to estimate $\theta=e^{-\lambda}$.

a) Find the Rao-Cramer lower bound for an unbiased estimator for $\theta$ (based on a sample of size n). Hint: The differentiation needs to be with respect to $\theta$, not $\lambda$.

The majority of us used the Fisher information to get:

$I_1(\theta) = \sum_x \left(\frac{d}{d\theta}\ln\left(e^{-\lambda}\frac{\lambda^x}{x!}\right)\right)^2 e^{-\lambda}\frac{\lambda^x}{x!}$

We then substituted $\theta$ for $e^{-\lambda}$:

$I_1(\theta) = \sum_x \left(\frac{d}{d\theta}\ln\left(\theta\frac{\lambda^x}{x!}\right)\right)^2 e^{-\lambda}\frac{\lambda^x}{x!}$

After that, we expanded the logarithm:

$I_1(\theta) = \sum_x \left(\frac{d}{d\theta}\left(\ln(\theta) + \ln(\lambda^x) - \ln(x!)\right)\right)^2 e^{-\lambda}\frac{\lambda^x}{x!}$

Then we took the derivative with respect to $\theta$.

$I_1(\theta) = \sum_x \left(\frac{1}{\theta}\right)^2 e^{-\lambda}\frac{\lambda^x}{x!}$

This is where we have a difference of opinion, because one of us came up with the following final answer:

$\frac{ne^{\lambda}}{\theta}$

He got there by cancelling one of the $\theta$s and then using a Taylor series to obtain $e^{\lambda}$.

Two of us, however, got the answer as:

$\frac{n}{\theta^2}$

We don't know which is right because either one could make sense.

If someone could please let us know on this, we would be grateful.

We are also stuck on part b.

b) Show that $T=\sum X_i$ is a complete sufficient statistic for $\theta=e^{-\lambda}$.

Now, from our notes, we have that a probability distribution belongs to an exponential family if it can be written in the form $e^{A(x)B(\theta)+C(x)+D(\theta)}$.

So we start out with:

$e^{-\lambda}\frac{\lambda^x}{x!}$
$=e^{-\lambda}e^{\ln(\frac{\lambda^x}{x!})}$
$=e^{-\lambda}e^{x\ln(\lambda)-\ln(x!)}$

This is where we get stuck, because we are not sure 1) where the $T$ enters this expression and 2) how we are supposed to relate this back to $\theta$.

So far, we have only come up with:

$=e^{\ln(\theta)+\sum X_i \ln(\lambda)-\ln(\sum X_i!)}$

We know we are probably completely and totally wrong on this.

When it comes to part c, we are a bit stuck as to what to do.

Part c goes like:

If we use $T^* = (\frac{n-1}{n})^{\Sigma X_i}$ as an estimator for $\theta=e^{-\lambda}$, as the Rao-Blackwell Theorem indicated, combine your findings in Parts (a) and (b) to determine Var($T^*$).

We know that this is a UMVUE that can probably be obtained via the Lehmann-Scheffé theorem.

The thing is, we just don't know how to apply it, or how to use what we find in parts a and b to get Var($T^*$).

If anyone can help out with any parts of this problem, we will be very grateful because we are very, very, very stuck on this.

Best Answer

(a) As I mentioned in a comment, you should focus on the parameter of interest $\theta$; it is not a good idea to write formulas that contain both $\theta$ and $\lambda$. With that in mind, it is routine to get the log-likelihood (denote $\sum X_i$ by $T$ and omit terms that don't contain $\theta$): $$\ell(\theta) = n\log\theta + T\log(-\log\theta)$$ Therefore, $$\ell'(\theta) = \frac{n}{\theta} + \frac{T}{\theta\log\theta}$$ $$\ell''(\theta) = -\frac{n}{\theta^2} - \frac{T(\log\theta + 1)}{(\theta\log\theta)^2}$$ As $E_\theta(T) = -n\log\theta$, it follows that the Fisher information is $$I(\theta) = -E_\theta(\ell''(\theta)) = -\frac{n}{\theta^2\log\theta}$$ Hence the C-R lower bound is given by $1/I(\theta) = \boxed{-\theta^2\log(\theta)/n}$. Since $\theta = P_\theta(X_i = 0)$, an unbiased estimator of $\theta$ is the proportion of the sample equal to $0$, namely, $$\hat{\theta} = \frac{\sum_{i = 1}^n \mathrm{I}\{X_i = 0\}}{n}.$$ Or, even simpler, just take $$\hat{\theta} = \mathrm{I}\{X_1 = 0\}.$$
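To spell out the last substitution (the algebra is compressed above): plugging $E_\theta(T) = -n\log\theta$ into $-E_\theta(\ell''(\theta))$ gives $$I(\theta) = \frac{n}{\theta^2} + \frac{(-n\log\theta)(\log\theta + 1)}{(\theta\log\theta)^2} = \frac{n}{\theta^2}\left(1 - \frac{\log\theta + 1}{\log\theta}\right) = -\frac{n}{\theta^2\log\theta},$$ which is positive, since $\log\theta < 0$ for $\theta = e^{-\lambda} \in (0, 1)$.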

(b) This is a routine application of the Fisher-Neyman factorization theorem and the properties of exponential families. Not hard if you write things out clearly.
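For instance, here is a sketch of the step the answer calls routine, using the exponential-family form quoted in the question: the joint pmf of the sample is \begin{align*} \prod_{i=1}^n e^{-\lambda}\frac{\lambda^{x_i}}{x_i!} = \frac{1}{\prod_{i=1}^n x_i!}\exp\left\{t\log\lambda - n\lambda\right\} = \frac{1}{\prod_{i=1}^n x_i!}\exp\left\{t\log(-\log\theta) + n\log\theta\right\}, \end{align*} where $t = \sum_{i=1}^n x_i$. By the factorization theorem, $T = \sum X_i$ is sufficient for $\theta$, and because this is a full-rank one-parameter exponential family whose natural parameter $\log\lambda$ ranges over all of $\mathbb{R}$, $T$ is also complete.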

(c) You may verify that (recall $T \sim \mathrm{Poisson}(n\lambda)$; here it is easier to go back to working with $\lambda$, but keep in mind that $\theta$ and $\lambda$ are in one-to-one correspondence): $$E_\theta(T^*) = \sum_{k = 0}^\infty \left(1 - \frac{1}{n}\right)^k e^{-n\lambda}\frac{(n\lambda)^k}{k!} = e^{-n\lambda}e^{n\lambda(1 - n^{-1})} = \theta.$$ Hence $T^*$ can also be used as an unbiased estimator of $\theta$ (in fact, it is the Rao-Blackwellization of $\hat{\theta}$; see the last paragraph of this answer).

In addition, \begin{align*} E_\theta((T^*)^2) = \sum_{k = 0}^\infty \left(1 - \frac{1}{n}\right)^{2k} e^{-n\lambda}\frac{(n\lambda)^k}{k!} = e^{-n\lambda}e^{n\lambda(1 - n^{-1})^2} = \theta^{2 - \frac{1}{n}}. \end{align*} It then follows that $$\mathrm{Var}_\theta(T^*) = \theta^{2 - \frac{1}{n}} - \theta^2.$$
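As a quick numerical sanity check (my addition, not needed for the proof; it assumes `numpy` is available), one can simulate $T \sim \mathrm{Poisson}(n\lambda)$ and compare the sample mean and variance of $T^*$ with the closed forms above:

    import numpy as np

    # Monte Carlo check of E[T*] and Var[T*] against the closed-form answers.
    rng = np.random.default_rng(0)
    n, lam = 5, 1.3                      # arbitrary sample size and Poisson rate
    theta = np.exp(-lam)                 # theta = e^{-lambda}
    reps = 500_000

    # T is the sum of n iid Pois(lam) draws, i.e. T ~ Pois(n * lam)
    T = rng.poisson(n * lam, size=reps)
    T_star = ((n - 1) / n) ** T          # the estimator T* = ((n-1)/n)^T

    print("E[T*]   sim:", T_star.mean(), " theory:", theta)
    print("Var[T*] sim:", T_star.var(),  " theory:", theta**(2 - 1/n) - theta**2)
    print("C-R bound  :", -theta**2 * np.log(theta) / n)

The simulated mean and variance should match $\theta$ and $\theta^{2 - \frac{1}{n}} - \theta^2$ up to Monte Carlo error, with the variance sitting above the C-R bound.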

One may verify that the C-R lower bound is indeed strictly less than $\mathrm{Var}_\theta(T^*)$ (as @81235 pointed out in the comments), as a consequence of the famous inequality \begin{align*} e^{\frac{\lambda}{n}} > 1 + \frac{\lambda}{n}. \end{align*}
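Explicitly, writing both quantities in terms of $\lambda = -\log\theta$: \begin{align*} \mathrm{Var}_\theta(T^*) = \theta^2\left(\theta^{-1/n} - 1\right) = \theta^2\left(e^{\lambda/n} - 1\right) > \theta^2 \cdot \frac{\lambda}{n} = -\frac{\theta^2\log\theta}{n} = \frac{1}{I(\theta)}. \end{align*}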

On the other hand, $\mathrm{Var}_\theta(T^*)$ is indeed no larger than $$\mathrm{Var}_\theta(\hat{\theta}) = P(X_1 = 0)(1 - P(X_1 = 0)) = e^{-\lambda}(1 - e^{-\lambda}) = \theta - \theta^2.$$ Together, we observe that

  1. $T^*$ is the UMVUE for $\theta$ (by the Lehmann-Scheffé theorem, since $T^*$ is an unbiased function of the complete sufficient statistic $T$);
  2. The UMVUE does not necessarily achieve the C-R lower bound.

Part (c) relates to parts (a) and (b) through the phrase "as the Rao-Blackwell Theorem indicated", which is worth elaborating. Basically, we want to show that the Rao-Blackwellized estimator $E(\hat{\theta} \mid T)$ based on the estimate $\hat{\theta}$ proposed in part (a) yields exactly $T^*$. Indeed, applying the classical distributional result \begin{align*} X_1 \mid X_1 + \cdots + X_n = t \sim \text{Binom}(t, n^{-1}), \end{align*} it follows that \begin{align*} E(\hat{\theta} \mid T) = P(X_1 = 0 \mid T) = (1 - n^{-1})^T = T^*, \end{align*} hence the form of $T^*$.
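(For reference, the classical distributional result quoted above is itself a one-line computation: since $X_1 \sim \mathrm{Poisson}(\lambda)$ is independent of $\sum_{i=2}^n X_i \sim \mathrm{Poisson}((n-1)\lambda)$, \begin{align*} P(X_1 = k \mid T = t) = \frac{P(X_1 = k)\,P\left(\sum_{i=2}^n X_i = t - k\right)}{P(T = t)} = \binom{t}{k}\left(\frac{1}{n}\right)^k\left(1 - \frac{1}{n}\right)^{t-k}, \end{align*} which is exactly the $\mathrm{Binom}(t, n^{-1})$ pmf.)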