Solved – Likelihood for negative binomial distribution

maximum-likelihood, negative-binomial-distribution

One parameterization of the negative binomial distribution is NB($m,r$), with $$\Pr(X = k) = \left(\frac{r}{r+m}\right)^r \frac{\Gamma(r+k)}{k! \, \Gamma(r)} \left(\frac{m}{r+m}\right)^k \quad\text{for }k = 0, 1, 2, \dots
$$ I would like to consider the parameterization NB($m$, $\phi$), where $\phi = 1 + \frac{m}{r}$, and to find the maximum likelihood estimates of $m$ and $\phi$ together with their standard errors. Can I just use an existing R package that finds the MLE of $m$ and $r$ and set $\hat{\phi} = 1 + \hat{m}/\hat{r}$? Or do I have to use some profile likelihood to fix one of the parameters?
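For reference, writing out the reparameterization explicitly: $\phi = 1 + m/r$ is equivalent to $r = m/(\phi-1)$, and under this substitution $r/(r+m) = 1/\phi$ and $m/(r+m) = (\phi-1)/\phi$, so the pmf in the $(m,\phi)$ parameterization is $$\Pr(X = k) = \phi^{-m/(\phi-1)} \, \frac{\Gamma\!\left(\frac{m}{\phi-1}+k\right)}{k!\,\Gamma\!\left(\frac{m}{\phi-1}\right)} \left(\frac{\phi-1}{\phi}\right)^k.$$ Since $\operatorname{Var}(X) = m + m^2/r = m\phi$, the parameter $\phi$ is the overdispersion factor $\operatorname{Var}(X)/\operatorname{E}(X)$.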

Best Answer

Actually, I'd like to disagree slightly with a previous answer. Yes, it's true that the point estimates produced by an MLE are invariant to transformations of the parameters, so it is correct that you can just take $\hat{\phi} = 1 + \hat{m}/\hat{r}$.
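For example, here is a minimal sketch in R using MASS::fitdistr, which fits the negative binomial in its (size $= r$, mu $= m$) parameterization; the simulated data are only for illustration:

```r
## A minimal sketch: fit NB(m, r) with an existing package, then use
## invariance of the MLE to get the point estimate of phi.
## MASS::fitdistr fits the negative binomial with parameters
## size (= r) and mu (= m).
library(MASS)

set.seed(1)
x <- rnbinom(500, size = 2, mu = 5)   # simulated example counts

fit   <- fitdistr(x, "negative binomial")
r_hat <- unname(fit$estimate["size"])
m_hat <- unname(fit$estimate["mu"])

phi_hat <- 1 + m_hat / r_hat          # invariance: just plug in the MLEs
phi_hat
```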

However, the question also asked about the standard errors (or, more completely, the covariance matrix) of the estimates $\hat{\phi}$ and $\hat{m}$. Those are not invariant, especially under a non-linear transformation such as this one. To compute a first-order approximation to the standard errors (more properly, the covariance matrix) under such a change of variables, you need the Jacobian. Section 2 of this reference describes how to use the Jacobian $J$ to transform the uncertainties; essentially $C' = J C J^{\mathsf{T}}$ (this is the delta method). Alternatively, equations 3, 9, and 17 of this reference sketch out the same basic idea. For a non-linear transformation of variables such as the one in this question, a first-order approximation based on the Jacobian can sometimes give poor results. If an accurate error estimate is important to you, you may also want to explore the unscented transform, a numerical framework for performing the same types of uncertainty-propagation calculations.
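Continuing the sketch above, the delta-method step is a few lines of matrix algebra: the covariance matrix returned by fitdistr (for size $= r$ and mu $= m$) is sandwiched between the Jacobian of the map $(r, m) \mapsto (m, \phi)$ evaluated at the MLEs.

```r
## Continuing the sketch above: delta-method covariance for (m_hat, phi_hat).
## fit$vcov is the estimated covariance matrix of (size, mu) = (r, m).
C <- fit$vcov

## Jacobian of the map (r, m) -> (m, phi), with phi = 1 + m/r, at the MLEs.
## Columns are derivatives with respect to (size = r, mu = m).
J <- matrix(c(0,                 1,           # d m   / d(r, m)
              -m_hat / r_hat^2,  1 / r_hat),  # d phi / d(r, m)
            nrow = 2, byrow = TRUE,
            dimnames = list(c("m", "phi"), c("size", "mu")))

C_new <- J %*% C %*% t(J)     # C' = J C J^T

sqrt(diag(C_new))             # approximate standard errors of m_hat and phi_hat
```

The second element of the result is the first-order approximation to the standard error of $\hat{\phi}$ that the question asks for.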
