Solved – Estimate the variance of MLE

hypothesis testing, maximum likelihood

From Wikipedia

Under the Wald statistical test, the maximum likelihood estimate $\hat\theta$ of the parameter(s) of interest $\theta$ is compared with the proposed value $\theta_0$, … In the univariate case, the Wald statistic is
$$
\frac{ ( \widehat{ \theta}-\theta_0 )^2 }{\operatorname{var}(\hat \theta )}
$$

where $\operatorname{var}(\widehat\theta)$ is the variance of the maximum likelihood estimate.
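As a concrete illustration (not from the quote), here is a minimal sketch of the univariate Wald statistic for a Bernoulli proportion, using the standard plug-in variance estimate $\hat p(1-\hat p)/n$; the sample and the null value $p_0$ are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.binomial(1, 0.6, size=200)      # Bernoulli sample, true p = 0.6

p_hat = x.mean()                        # MLE of p
var_hat = p_hat * (1 - p_hat) / len(x)  # plug-in estimate of var(p_hat)

p0 = 0.5                                # proposed value under H0
wald = (p_hat - p0) ** 2 / var_hat      # ~ chi-squared with 1 df under H0
print(wald)
```

Under the null hypothesis the statistic is approximately $\chi^2_1$, so a value above about 3.84 would reject at the 5% level.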

My question is how to estimate $\operatorname{var}(\widehat\theta)$, i.e. the variance of the MLE $\hat{\theta}$.

  1. The same article says

    A reasonable estimate of the variance for the MLE can be given by $\frac{1}{I_n(\widehat\theta)}$, where $I_n$ is the Fisher
    information of the parameter.

  2. I am thinking of another way. Since
    $\operatorname{var}(\widehat\theta)$ is $f(n, \theta, \eta)$, i.e.
    some function of the sample size $n$, the true value $\theta$ of
    the parameter of interest, and possibly some other parameter $\eta$
    (because $\theta$ alone may not fully specify the distribution of
    the sample), can we first compute the MLEs $\hat{\theta}$ and
    $\hat{\eta}$ of $\theta$ and $\eta$, and then take $f(n,
    \hat{\theta}, \hat{\eta})$ as the MLE of
    $\operatorname{var}(\widehat\theta)$, by the equivariance
    (functional invariance) of the MLE?
  3. Are the above two methods the same, at least in some cases, such as
    when the statistical model of the sample is a normal distribution?
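To see how questions 2 and 3 play out in the normal case: for a normal sample the variance of the mean's MLE is exactly $\sigma^2/n$, and the Fisher information for $\mu$ is $I_n(\mu) = n/\sigma^2$, so the two recipes give the same number once $\hat\sigma^2$ is plugged in. A short sketch (the data are simulated for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=3.0, size=500)
n = len(x)

mu_hat = x.mean()            # MLE of mu
sigma2_hat = x.var()         # MLE of sigma^2 (divides by n, not n-1)

# Method 2: plug the MLEs into the exact formula var(mu_hat) = sigma^2 / n
var_plugin = sigma2_hat / n

# Method 1: inverse Fisher information; for the normal mean
# I_n(mu) = n / sigma^2, so 1 / I_n(mu_hat) = sigma2_hat / n
var_fisher = 1.0 / (n / sigma2_hat)

print(var_plugin, var_fisher)   # identical for this model
```

So in this model the answer to question 3 is yes: inverting the (estimated) Fisher information and plugging the MLEs into $f(n, \theta, \eta) = \sigma^2/n$ coincide.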

Thanks and regards!

Best Answer

The method you mention in (2) is similar to the way the variance is usually estimated, namely through the observed information. The problem with the Fisher information is that it depends on the unknown parameter $\theta$. Using the plug-in estimator defined in the link I specified lets you compute an estimate of the Fisher information. The second Wikipedia quote you mention is equivalent to (2).
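A hedged sketch of the observed-information route for an exponential sample with rate $\lambda$: the log-likelihood is $\ell(\lambda) = n\log\lambda - \lambda\sum x_i$, the MLE is $\hat\lambda = 1/\bar x$, and the observed information $-\ell''(\hat\lambda) = n/\hat\lambda^2$ gives the variance estimate $\hat\lambda^2/n$. The code checks the analytic value against a finite-difference approximation of $-\ell''$ (the data are simulated for the example):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(scale=2.0, size=1000)   # true rate lambda = 0.5
n = len(x)

lam_hat = 1.0 / x.mean()                    # MLE of the exponential rate

def loglik(lam):
    return n * np.log(lam) - lam * x.sum()

# Observed information: minus the second derivative of the log-likelihood
# at the MLE, approximated by a central finite difference
h = 1e-5
obs_info = -(loglik(lam_hat + h) - 2 * loglik(lam_hat)
             + loglik(lam_hat - h)) / h**2

var_numeric = 1.0 / obs_info     # variance estimate via observed information
var_analytic = lam_hat**2 / n    # closed form, since -l''(lam) = n / lam^2

print(var_numeric, var_analytic)  # should agree closely
```

The same recipe works when no closed form is available: maximize the log-likelihood numerically, then invert the (numerically differentiated) observed information at the maximum.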
