Generalized Linear Models – Variance Functions for Poisson and Negative Binomial Distributions

generalized-linear-model, poisson-regression, variance

I'm having some trouble understanding how the variance functions of the Poisson or negative binomial distributions tie in to the standard errors of the coefficients. I mention both models because I'm not sure whether things work the same way in each.

Say both the Poisson and negative binomial models have two coefficients: $\beta_0$, the intercept, and $\beta_1$, the coefficient for sex (coded 1 for female and 0 for male).

The variance function for the Poisson is $\sigma^2=\lambda$. The variance function for the negative binomial is $\sigma^2=\mu+\frac{1}{\theta}\mu^2$, where $\theta$ is the scale parameter.

What I understand is that you can use the variance function to infer the variance within the groups of your data. Of course, you have the data itself, so you could just take its variance directly, but you can also see what the model predicts for the variance. For example, in the Poisson model the mean of the "female" group is $e^{\beta_0+\beta_1}$; given the estimated coefficients you can compute that mean, which under the Poisson model is also the variance. Similarly, the variance of the "male" group would be $e^{\beta_0}$.

For the negative binomial I believe it's the same, but using its variance function: the variance of the "female" group would be estimated as $e^{\beta_0+\beta_1}+\frac{1}{\theta}\cdot (e^{\beta_0+\beta_1})^2$, assuming you know the scale parameter $\theta$.
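For concreteness, here is a small sketch of those plug-in calculations in R. The fitted objects fit_pois (from glm(..., family = poisson())) and fit_nb (from MASS::glm.nb()) are hypothetical names, each assumed to have coefficients (Intercept) and sex:

b <- coef(fit_pois)              # hypothetical Poisson fit
exp(b[1] + b[2])                 # female mean = variance under Poisson
exp(b[1])                        # male mean = variance under Poisson

b_nb <- coef(fit_nb)             # hypothetical MASS::glm.nb fit
mu_f <- exp(b_nb[1] + b_nb[2])   # female mean
mu_f + mu_f^2 / fit_nb$theta     # negative binomial variance for females; glm.nb stores theta in $theta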

Now, how does the variance function relate to the standard error of the coefficients (e.g. $\beta_1$)? I've been trying to figure this out for a while. Is it possible to calculate the standard errors from the R output (i.e., given the coefficients, plus the scale parameter in the negative binomial case)? In a Poisson model, what role does $\lambda$ play in the standard error of a coefficient, and what role does $\theta$ play in the negative binomial?

Any help would be appreciated!

Best Answer

Is it possible to calculate the [coefficient] standard errors from the R output?

It depends on what you mean by "the R output." The idea is conceptually the same for generalized linear models and for ordinary least squares models, so maybe it's simplest to start thinking about this in the context of ordinary least squares regression. Consider the example from the lm() help page:

ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14)
trt <- c(4.81,4.17,4.41,3.59,5.87,3.83,6.03,4.89,4.32,4.69)
group <- gl(2, 10, 20, labels = c("Ctl","Trt"))
weight <- c(ctl, trt)
lm.D9 <- lm(weight ~ group)

If you then type lm.D9 at the prompt, you get the coefficient estimates

lm.D9

# Call:
# lm(formula = weight ~ group)
# 
# Coefficients:
# (Intercept)     groupTrt  
#       5.032       -0.371  

If that's all you have to work with, you can't get the standard errors of the coefficients even in this simple case. The standard errors of the coefficients are the square roots of the diagonal elements of the coefficient variance-covariance matrix. As this page (among many others) shows, that matrix is $\hat \sigma^2 (X'X)^{-1}$, where $\hat \sigma^2$ is the estimated error variance and $X$ is the design matrix that encodes the data for the model. Even for an intercept-only model, the standard error of the mean isn't $\sqrt{\hat \sigma^2}$ by itself, as one might infer from the question as stated. It's $\sqrt{\hat \sigma^2/n}$, where $n$ is the number of observations.
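To verify that formula by hand for this example, here is a quick sketch using base-R accessors on the fitted model:

X <- model.matrix(lm.D9)                              # design matrix
s2 <- sum(residuals(lm.D9)^2) / df.residual(lm.D9)    # estimated error variance
s2 * solve(t(X) %*% X)                                # the same matrix that vcov() reports below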

Most R model classes provide a vcov() method that reports that matrix:

vcov(lm.D9)
#             (Intercept)    groupTrt
# (Intercept)  0.04849583 -0.04849583
# groupTrt    -0.04849583  0.09699167

sqrt(diag(vcov(lm.D9)))
# (Intercept)    groupTrt 
#   0.2202177   0.3114349 

For many model types (like this one) you can get more information simply by asking for a summary() that directly reports the standard errors along with the coefficient estimates (not shown).

The principle is the same for generalized linear models like Poisson that are fit by maximum likelihood estimation (MLE). As this answer says:

the estimated standard errors of the MLE are the square roots of the diagonal elements of the inverse of the observed Fisher information matrix. In other words: The square roots of the diagonal elements of the inverse of the Hessian (or the negative Hessian) are the estimated standard errors.

That is, the variance-covariance matrix of the coefficient estimates is the inverse of the negative of the matrix of second derivatives of the log-likelihood with respect to the coefficients, evaluated at the final solution. For ordinary linear regression under the classic assumption of independent, normally distributed errors having constant variance, that ends up identical to the formula above. For other generalized linear models, however, there is typically no closed-form solution, and the entire variance-covariance matrix must be estimated from the combination of the model and the data.

A Poisson model assumes that the variance around any estimate equals the mean. So the likelihood used to fit the model, which includes a parameter $\lambda$ for the mean, inherently also includes the variance $\lambda$. For the Poisson example on the glm() help page:

counts <- c(18,17,15,20,10,20,25,13,12)
outcome <- gl(3,1,9)
treatment <- gl(3,3)
glm.D93 <- glm(counts ~ outcome + treatment, family = poisson())
glm.D93
#
# Call:  glm(formula = counts ~ outcome + treatment, family = poisson())
# 
# Coefficients:
# (Intercept)     outcome2     outcome3   treatment2   treatment3  
#   3.045e+00   -4.543e-01   -2.930e-01    1.338e-15    1.421e-15  
# 
# Degrees of Freedom: 8 Total (i.e. Null);  4 Residual
# Null Deviance:        10.58 
# Residual Deviance: 5.129  AIC: 56.76
# 
sqrt(diag(vcov(glm.D93)))
# (Intercept)    outcome2    outcome3  treatment2  treatment3 
#   0.1708987   0.2021708   0.1927423   0.2000000   0.2000000 

Again in this case, summary(glm.D93) will produce those values along with additional information about the model.
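To make the Hessian connection concrete, here is one sketch (not the only way to do it) that refits the same Poisson model by direct maximum likelihood with optim() and recovers the standard errors from the numerical Hessian of the negative log-likelihood:

X <- model.matrix(glm.D93)            # same design matrix as the glm() fit
negll <- function(beta) {             # negative Poisson log-likelihood, up to a constant
  eta <- drop(X %*% beta)
  -sum(counts * eta - exp(eta))
}
fit <- optim(rep(0, ncol(X)), negll, method = "BFGS", hessian = TRUE)
sqrt(diag(solve(fit$hessian)))        # approximately matches sqrt(diag(vcov(glm.D93)))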

how does the variance function relate to the standard error of the coefficients?

In this Poisson case, it's mainly that the maximum-likelihood estimate $\hat \lambda$ of the mean also estimates the variance. For an intercept-only model with $n$ observations, this page shows that the standard error of the estimate of the mean is $\sqrt{\hat \lambda/n}$. This answer shows a similar intercept-only result for a negative binomial (in a different parameterization from yours). In both cases the coefficient standard error is inversely proportional to $\sqrt{n}$.
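You can check the intercept-only Poisson result numerically with the counts data from above. One wrinkle: glm() reports the standard error of the intercept on the log scale, so the delta method links the two versions:

fit0 <- glm(counts ~ 1, family = poisson())
lambda_hat <- mean(counts)
n <- length(counts)
sqrt(lambda_hat / n)                  # SE of the estimated mean itself
sqrt(1 / (n * lambda_hat))            # SE of the intercept on the log scale
sqrt(diag(vcov(fit0)))                # matches the log-scale value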

For a single binary predictor and independent observations, as in your model, you essentially have two such intercept-only models in the Poisson case, or in the negative binomial case when you pre-specify a known $\theta$. For more complicated regressions, in which you need to estimate the negative-binomial $\theta$ or have more predictors whose coefficients must be estimated together, the standard errors also depend on how precisely the model fits the data under the modeling assumptions.
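As a final sketch of the estimated-$\theta$ case, here is how this might look with MASS::glm.nb() on the quine data that ships with MASS, using the binary Sex predictor:

library(MASS)
nb_fit <- glm.nb(Days ~ Sex, data = quine)
nb_fit$theta                          # estimated theta (its standard error is in nb_fit$SE.theta)
sqrt(diag(vcov(nb_fit)))              # coefficient standard errors, conditional on the estimated theta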