Solved – When to use robust standard errors in Poisson regression

Tags: poisson-distribution, robust

I am using a Poisson regression model for count data and am wondering whether there are reasons not to use robust standard errors for the parameter estimates. I am particularly concerned because some of my estimates are not significant without robust standard errors (e.g., p = 0.13) but are significant with them (p < 0.01).

In SAS this is available through the repeated statement in proc genmod (e.g., repeated subject=patid;). I've been using http://www.ats.ucla.edu/stat/sas/dae/poissonreg.htm as an example, which cites Cameron and Trivedi (2009) in support of using robust standard errors.

Best Answer

In general, if you have any suspicion that your errors are heteroskedastic, you should use robust standard errors. Heteroskedasticity does not bias the coefficient estimates themselves, but it does make the usual model-based standard errors inconsistent; robust (sandwich) standard errors remain valid in that case. The fact that your p-values change so much when you switch to robust SEs suggests (but does not prove) that the model's variance assumption is violated.

This situation is a little different, though, in that you're layering them on top of Poisson regression.

Poisson regression has the well-known property that it forces the variance to equal the mean (equidispersion), whether or not the data support that. Before relying on robust standard errors, I would try a negative binomial regression, which estimates the dispersion instead of assuming it away. There is a test (see the comments) to help determine whether the resulting change in standard errors is significant.

I do not know for sure whether the change you're seeing (moving to robust SEs narrows the CI) implies under-dispersion, but it seems likely. Note that negative binomial regression can only accommodate over-dispersion (its variance is always at least the mean), so if your data really are under-dispersed, quasi-Poisson, which estimates a dispersion factor that can fall below 1, is the more appropriate model. Fit it and see what you get in that setting.