Solved – Cramér-Rao inequality and MLEs

bounds, maximum likelihood, point-estimation

I know that, if it exists, a regular unbiased estimator $T$ for $\tau(\theta)$ attains the Cramér–Rao Lower Bound (hereafter, CRLB) if and only if the score function can be decomposed as
$$S(\theta)=\frac{\partial}{\partial\theta}\log f_{\mathbf{X}}(\mathbf{x};\theta)=k(\theta,n)\,[T(\mathbf{x})-\tau(\theta)],$$
where $k(\theta,n)$ is some function of $\theta$ and $n$ that does not depend on the data.

In particular, what is the link between the CRLB, this decomposition, and the MLE $\hat{\theta}$? Is it the case that $\hat{\theta}$ always satisfies the decomposition above and therefore always attains the CRLB?

Best Answer

It's difficult to identify the correct level of rigor for an answer. I added the "regularity" condition to your question, since there are unbiased estimators that beat the Cramér–Rao bound when the regularity conditions fail.

Regular exponential families have score functions that take exactly this linear form. So the decomposition is not arbitrary; it arises naturally when estimating the "usual" quantities, namely the mean-value parameters of well-behaved models.
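As a concrete illustration (a standard textbook example, not part of the original answer): let $X_1,\dots,X_n$ be i.i.d. $N(\theta,\sigma^2)$ with $\sigma^2$ known. The score is
$$S(\theta)=\frac{1}{\sigma^2}\sum_{i=1}^n (x_i-\theta)=\frac{n}{\sigma^2}\,[\bar{x}-\theta],$$
which is the decomposition with $k(\theta,n)=n/\sigma^2$, $T(\mathbf{x})=\bar{x}$ and $\tau(\theta)=\theta$. Indeed $\operatorname{Var}(\bar{X})=\sigma^2/n=1/I(\theta)$, so $\bar{X}$ attains the CRLB.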

As you know, obtaining the maximum of a function such as a likelihood or log-likelihood amounts to finding a root of its derivative, provided the function is smooth and the maximum lies in the interior of the parameter space. For regular exponential families, the linear form of the score means that this root is available in closed form; a sketch of the step follows.
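Sketching that step (my own paraphrase, assuming $k(\theta,n)\neq 0$): setting the score to zero gives
$$k(\hat\theta,n)\,[T(\mathbf{x})-\tau(\hat\theta)]=0 \quad\Longrightarrow\quad \tau(\hat\theta)=T(\mathbf{x}),$$
so whenever the decomposition holds, the MLE of $\tau(\theta)$ (by invariance, $\tau(\hat\theta)$) is exactly the estimator $T$ that attains the bound. When the decomposition does not hold, no unbiased estimator attains the CRLB, and in particular an unbiased MLE does not.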

Under the usual regularity conditions the score has expectation $0$, and its variance is the Fisher information $I(\theta)$. It was a revelation to me to think of the score as a random variable, but indeed it is a function of $\mathbf{X}$. Applying the Cauchy–Schwarz inequality to the covariance between $T(\mathbf{X})$ and the score yields the CRLB, and equality holds precisely when the score is a linear function of $T(\mathbf{X})$, which is the decomposition in your question. (A biased estimator is just an unbiased estimator of its own expectation, so the same argument bounds its variance with the derivative of that expectation in place of $\tau'(\theta)$.) A sketch of this derivation is below.
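Here is the sketch (standard, not verbatim from the original answer). Under regularity, differentiating $E_\theta[T(\mathbf{X})]=\tau(\theta)$ under the integral sign and using $E_\theta[S(\theta)]=0$ gives
$$\operatorname{Cov}_\theta\big(T(\mathbf{X}),S(\theta)\big)=\tau'(\theta).$$
Cauchy–Schwarz then gives
$$[\tau'(\theta)]^2=\operatorname{Cov}_\theta(T,S)^2\le \operatorname{Var}_\theta(T)\,\operatorname{Var}_\theta(S)=\operatorname{Var}_\theta(T)\,I(\theta),$$
i.e. $\operatorname{Var}_\theta(T)\ge[\tau'(\theta)]^2/I(\theta)$, with equality if and only if $S(\theta)$ and $T(\mathbf{X})$ are almost surely linearly related; combined with $E_\theta[S]=0$ and $E_\theta[T]=\tau(\theta)$, that linear relation is exactly $S(\theta)=k(\theta,n)\,[T(\mathbf{X})-\tau(\theta)]$.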
