When trying to determine what sort of GLM you want to estimate, you should think about plausible relationships between the expected value of your target variable given the right-hand-side (rhs) variables and the variance of the target variable given the rhs variables. Plots of the residuals vs. the fitted values from your Normal model can help with this. With Poisson regression, the assumed relationship is that the variance equals the expected value; rather restrictive, I think you'll agree. With a "standard" linear regression, the assumption is that the variance is constant regardless of the expected value. For a quasi-Poisson regression, the variance is assumed to be a linear function of the mean; for negative binomial regression, a quadratic function.
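For instance, a quick way to eyeball the mean-variance relationship is to plot the absolute residuals from the Normal fit against its fitted values; a minimal sketch on simulated count data (illustrative only, not your data):
set.seed(42)
x <- runif(200, 0, 3)
y <- rpois(200, lambda = exp(1 + 0.8 * x))   # simulated counts whose variance grows with the mean
fit <- lm(y ~ x)                             # the "Normal" model
plot(fitted(fit), abs(resid(fit)), xlab = "Fitted values", ylab = "|Residual|")
lines(lowess(fitted(fit), abs(resid(fit))), col = "red")   # a rising trend signals non-constant variance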
However, you aren't restricted to these relationships. The specification of a "family" (other than "quasi") determines the mean-variance relationship. I don't have The R Book, but I imagine it has a table that shows the family functions and corresponding mean-variance relationships. For the "quasi" family you can specify any of several mean-variance relationships, and you can even write your own; see the R documentation. It may be that you can find a much better fit by specifying a non-default value for the mean-variance function in a "quasi" model.
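As a rough sketch of what that looks like in R (the data here are simulated purely for illustration; "mu" and "mu^2" are two of the variance options that quasi() accepts):
set.seed(1)
x <- runif(200, 0, 3)
y <- pmax(rpois(200, exp(2 + 0.8 * x)), 1)   # keep y > 0: the "mu^2" variance cannot handle exact zeros
m_lin  <- glm(y ~ x, family = quasi(link = "log", variance = "mu"))    # Var(y) proportional to the mean (quasi-Poisson-like)
m_quad <- glm(y ~ x, family = quasi(link = "log", variance = "mu^2"))  # Var(y) proportional to the mean squared
summary(m_lin)$dispersion
summary(m_quad)$dispersion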
You also should pay attention to the range of the target variable; in your case it's nonnegative count data. If you have a substantial fraction of low values - 0, 1, 2 - the continuous distributions probably won't fit well, but if you don't, there's not much value in using a discrete distribution. It's rare that you'd consider Poisson and Normal distributions as competitors.
Firstly, I don't think you have zero-inflation in your data (or at least the data that you have included in the question). Zero-inflation arises when something (unobserved) results in a zero count/observation even though the other predictors suggest that the observation should be positive. In your case, a (silly) example might be a disgruntled grad student sneaking into the lab and spraying DDT on some of the warm temperature experiments to mess with your head - even though the subjects should have survived at e.g. 28 degrees, some unseen force has prevented this from happening. A less silly example is a recorded zero abundance in habitat data simply because a perfectly suitable area has never been colonised (either by chance or some physical barrier), or a zero recording of parasite counts from a highly susceptible animal that has never been exposed to the parasites. I think people are generally too quick to jump to zero-inflated models simply because of a large number of observed zeros - see also:
Warton, D.I. 2005. Many zeros does not mean zero inflation: comparing the goodness-of-fit of parametric models to multivariate abundance data. Environmetrics 16:275–289.
So if you want to model over-dispersion then I would suggest using an observation-level random effect (where 'observation' is the number dead and alive from each group, e.g. {10, 0} for the first row). I have used this approach successfully for similar analyses, although generally for larger group sizes than 10.
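A minimal sketch of that approach, assuming the lme4 package and using the df data frame built from your data in the edit below (so run that block first):
library(lme4)
df$obs <- factor(seq_len(nrow(df)))           # one random-effect level per row, i.e. per group of 10 animals
m_olre <- glmer(cbind(Alive, Dead) ~ Temperature + (1 | obs), family = binomial, data = df)
summary(m_olre)                               # the 'obs' variance absorbs any extra-binomial variation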
However, based on the data you have shown, I don't think this is necessary either: all of the observations below 32 degrees are entirely consistent with a common probability of survival (around 97%), and all of the observations above 34 degrees are also entirely consistent with a common probability of survival (around 3%). If you fit an over-dispersed model to this then the optimiser will probably reduce the over-dispersion component to zero. If this really is your data then what you actually need to fit is a temperature threshold effect (e.g. above/below 33 degrees), which will then describe the data so well that it will in fact be quasi-separated ... leading you to potentially more problems! Of course it is also possible that the data you have shown is incomplete and/or a fabricated example, in which case you can ignore this paragraph :)
---- EDIT IN RESPONSE TO EDITED QUESTION ----
The model that you are trying to fit uses a linear effect of temperature, but your data suggest that the effect is not linear (on the logit scale). If you have only a linear effect of temperature then an additional parameter (over-dispersion) is needed to soak up the extra unexplained variation in the response, but you may be able to do a better job with a more appropriate effect of temperature. Try the following code for inspiration.
Your data, and a new data frame to use only for visualising predictions:
df <- read.table(header=TRUE, file=textConnection('Temperature Alive Dead
28 10 0
28 10 0
28 9 1
28 10 0
30 10 0
30 10 0
30 10 0
30 10 0
32 9 1
32 9 1
32 9 1
32 10 0
34 0 10
34 0 10
34 0 10
34 2 8
36 0 10
36 0 10
36 0 10
36 0 10'))
df$Response <- cbind(df$Alive, df$Dead)                      # two-column (successes, failures) matrix response for glm()
df$Proportion <- df$Alive / (df$Alive + df$Dead)             # observed survival proportion, used for plotting
df$Replicate <- 1:4                                          # recycled: labels replicates 1-4 within each temperature
newdata <- data.frame(Temperature=seq(28,36,length=1000))    # fine grid for plotting predicted curves
The model you are using assumes a linear (on the logit scale) effect of temperature, but the plot of the datapoints suggests a more drastic change between 32 and 34 degrees than is consistent with a linear change:
model1 <- glm(Response ~ Temperature, family=binomial, data=df)
extractAIC(model1)
plot(df$Temperature, df$Proportion, pch=df$Replicate)
lines(newdata$Temperature, predict(model1, type='response', newdata=newdata), type='l')
A simple threshold effect of 33 degrees gives a better prediction:
model2 <- glm(Response ~ I(Temperature > 33), family=binomial, data=df)
extractAIC(model2)
plot(df$Temperature, df$Proportion, pch=df$Replicate)
lines(newdata$Temperature, predict(model2, type='response', newdata=newdata), type='l')
An alternative is to use a polynomial expansion to explain a curve with (almost) arbitrary shape - the highest order we can use with your data (five distinct temperatures) is 4, but this seems to give the best fit:
model3 <- glm(Response ~ poly(Temperature, 4), family=binomial, data=df)
extractAIC(model3)
plot(df$Temperature, df$Proportion, pch=df$Replicate)
lines(newdata$Temperature, predict(model3, type='response', newdata=newdata), type='l')
I haven't checked, but I suspect that your test for over-dispersion would indicate no problems with either model 2 or model 3. The obvious problem with model 2 is that we have chosen the threshold based on the data, so the model doesn't help you find the threshold itself. For that reason I'd probably use something more like model 3.
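A quick, rough check of that, assuming the three models fitted above: compare the residual deviance to the residual degrees of freedom for each model (values near 1 suggest no over-dispersion).
sapply(list(model1 = model1, model2 = model2, model3 = model3),
       function(m) deviance(m) / df.residual(m))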
Best Answer
Short answer
Overdispersion does not matter when estimating a vector of regression coefficients for the conditional mean in a quasi-Poisson/Poisson model! You will be fine if you forget about the overdispersion here, use glmnet with the Poisson family, and just focus on whether your cross-validated prediction error is low.
The qualification follows below.
Poisson, Quasi-Poisson and estimating functions:
I say the above because overdispersion (OD) in a Poisson or quasi-Poisson model affects everything to do with the dispersion (or variance, or scale, or heterogeneity, or spread, or whatever you want to call it) and as such has an effect on the standard errors and confidence intervals, but it leaves the estimates for the conditional mean of $y$ (called $\mu$) untouched. This particularly applies to linear decompositions of the mean, like $x^\top\beta$.
This comes from the fact that the estimating equations for the coefficients of the conditional mean are practically the same for both Poisson and quasi-Poisson models. Quasi-Poisson specifies the variance function in terms of the mean and an additional parameter (say $\theta$) as $Var(y)=\theta\mu$ (with $\theta=1$ for Poisson), but $\theta$ turns out not to matter when solving the estimating equation. Thus $\theta$ plays no role in estimating $\beta$ whenever the conditional mean and variance are proportional. Therefore the point estimates $\hat{\beta}$ are identical for the quasi-Poisson and Poisson models!
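To see why (a standard result, sketched here): under $Var(y_i)=\theta\mu_i$ the quasi-score equations are $\sum_i \frac{\partial \mu_i}{\partial \beta}\,\frac{y_i-\mu_i}{\theta\mu_i} = \frac{1}{\theta}\sum_i \frac{\partial \mu_i}{\partial \beta}\,\frac{y_i-\mu_i}{\mu_i} = 0$, and the constant $\theta$ factors out, so the roots $\hat{\beta}$ are the same for every $\theta>0$.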
Let me illustrate with an example:
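Here is a minimal sketch of that kind of comparison on simulated over-dispersed data (the data set and its exact dispersion value are illustrative only; modp and modqp are the model names referred to below):
set.seed(123)
n  <- 500
x1 <- rnorm(n)
x2 <- rnorm(n)
y  <- rnbinom(n, mu = exp(1 + 0.5 * x1 - 0.3 * x2), size = 0.5)   # negative-binomial draws => over-dispersed counts
modp  <- glm(y ~ x1 + x2, family = poisson)
modqp <- glm(y ~ x1 + x2, family = quasipoisson)
cbind(poisson = coef(modp), quasipoisson = coef(modqp))           # identical point estimates
summary(modp)$coefficients[, "Std. Error"]                        # overconfident standard errors
summary(modqp)$coefficients[, "Std. Error"]                       # inflated by the estimated dispersion
deviance(modp) / modp$df.residual                                 # rough over-dispersion measure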
As you can see, even though we have strong overdispersion of 12.21 in this data set (by deviance(modp)/modp$df.residual), the regression coefficients (point estimates) do not change at all. But notice how the standard errors change.

The question of the effect of overdispersion in penalized Poisson models
Penalized models are mostly used for prediction and variable selection, and not (yet) for inference. So people who use these models are interested in the regression parameters for the conditional mean, just shrunk towards zero. If the penalization is the same, the estimating equations for the conditional mean derived from the penalized (quasi-)likelihood also do not depend on $\theta$, and therefore overdispersion does not matter for the estimates of $\beta$ in a model of the type:
$g(\mu)=x^\top\beta + f(\beta)$
as $\beta$ is estimated the same way for any variance function of the form $\theta\mu$, so again for all models where the conditional mean and variance are proportional. This is just as in unpenalized Poisson/quasi-Poisson models.
If you don't want to take this at face value and prefer to avoid the math, you can find empirical support in the fact that in glmnet, if you push the regularization parameter towards 0 (and thus $f(\beta)=0$), you end up pretty much where the Poisson and quasi-Poisson models land (see the last column below, where lambda is 0.005).

So what does OD do to penalized regression models?

As you may know, there is still some debate about the proper way to calculate standard errors for penalized models, and glmnet does not output any anyway, probably for that reason. It may very well be that OD would influence the inference part of the model, just as it does in the non-penalized case, but unless some consensus regarding inference in this case is reached, we won't know. As an aside, one can leave all this messiness behind if one is willing to adopt a Bayesian view, where penalized models are just standard models with a specific prior.
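A hedged sketch of that check, reusing the simulated data and the modp/modqp fits from the earlier sketch (so the exact numbers are illustrative):
library(glmnet)
X <- cbind(x1, x2)                                        # glmnet wants a numeric predictor matrix
fit_net <- glmnet(X, y, family = "poisson", lambda = c(1, 0.1, 0.01, 0.005))
cbind(glm_poisson      = coef(modp),
      glm_quasipoisson = coef(modqp),
      glmnet_0.005     = as.numeric(coef(fit_net, s = 0.005)))   # last column: essentially the GLM estimates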