Here is what I usually like doing (for illustration I use the overdispersed and not very easily modelled quine data of pupils' days absent from school from MASS):
Test and graph the original count data by plotting observed frequencies and fitted frequencies (see chapter 2 in Friendly's book), which is supported in large part by the vcd package in R. For example, with goodfit and a rootogram:
library(MASS)
library(vcd)
data(quine)
fit <- goodfit(quine$Days)  # fits a Poisson by default
summary(fit)                # goodness-of-fit test
rootogram(fit)              # observed vs. fitted frequencies
or with Ord plots, which help identify the underlying count data model (e.g., here both the slope and the intercept are positive, which speaks for a negative binomial distribution):
Ord_plot(quine$Days)
or with the "XXXXXXness" plots where XXXXX is the distribution of choice, say Poissoness plot (which speaks against Poisson, try also type="nbinom"
):
distplot(quine$Days, type="poisson")
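The same check against the negative binomial (as hinted above) only needs a change of the type argument:
distplot(quine$Days, type="nbinom")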
Inspect the usual goodness-of-fit measures (such as likelihood ratio statistics vs. a null model or similar):
mod1 <- glm(Days~Age+Sex, data=quine, family="poisson")
summary(mod1)
anova(mod1, test="Chisq")
Check for over-/underdispersion by looking at the ratio of residual deviance to degrees of freedom, or at a formal test statistic (e.g., see this answer). Here we clearly have overdispersion:
library(AER)
deviance(mod1)/mod1$df.residual  # should be close to 1; here it is far above
dispersiontest(mod1)             # formal test of equidispersion
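Given the clear overdispersion (and the Ord plot's hint above), a minimal sketch of a remedy is refitting with a negative binomial via glm.nb from MASS (mod_nb is an illustrative name, not part of the steps above):
mod_nb <- glm.nb(Days~Age+Sex, data=quine)  # negative binomial counterpart of mod1
deviance(mod_nb)/mod_nb$df.residual         # the ratio should move much closer to 1
AIC(mod1, mod_nb)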
Check for influential and leverage points, e.g., with influencePlot from the car package. Of course, many points here are highly influential because the Poisson is a bad model:
library(car)
influencePlot(mod1)
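If you prefer base R, Cook's distances give a similar picture (a minimal alternative sketch):
plot(mod1, which=4)  # Cook's distance for each observation
head(sort(cooks.distance(mod1), decreasing=TRUE))  # the most influential cases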
Check for zero inflation by fitting a count data model and its zero-inflated / hurdle counterpart and comparing them (usually with AIC). Here a zero-inflated model would fit better than the simple Poisson (again probably due to the overdispersion):
library(pscl)
mod2 <- zeroinfl(Days~Age+Sex, data=quine, dist="poisson")
AIC(mod1, mod2)
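The hurdle counterpart mentioned above works the same way (a minimal sketch; mod3 is an illustrative name):
mod3 <- hurdle(Days~Age+Sex, data=quine, dist="poisson")  # hurdle counterpart
AIC(mod1, mod2, mod3)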
Plot the residuals (raw, deviance, or scaled) on the y-axis vs. the (log) predicted values (or the linear predictor) on the x-axis. Here we see some very large residuals and a substantial deviation of the deviance residuals from normality (speaking against the Poisson; Edit: @FlorianHartig's answer suggests that normality of these residuals is not to be expected, so this is not a conclusive clue):
res <- residuals(mod1, type="deviance")
plot(predict(mod1, type="link"), res)  # the linear predictor is already log(predicted mean)
abline(h=0, lty=2)
qqnorm(res)
qqline(res)
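Following up on the edit above: simulation-based quantile residuals do have a known reference distribution (uniform), so a sketch with @FlorianHartig's DHARMa package (assuming it is installed) may be more conclusive:
library(DHARMa)
sim <- simulateResiduals(mod1)  # simulated quantile residuals
plot(sim)  # QQ plot against the uniform plus residuals vs. predicted values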
If interested, plot a half-normal probability plot of residuals by plotting ordered absolute residuals vs. expected normal values (Atkinson, 1981). A special feature would be to simulate a reference 'line' and envelope with simulated / bootstrapped confidence intervals (a rough sketch follows the code below):
library(faraway)
halfnorm(residuals(mod1))
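One way to sketch such a simulated envelope, assuming a parametric simulation from the fitted Poisson and a 95% band (the details here are mine, not Atkinson's exact recipe):
n <- nobs(mod1)
sims <- replicate(100, sort(abs(residuals(
  glm(rpois(n, fitted(mod1))~Age+Sex, data=quine, family="poisson"),
  type="deviance"))))
env <- apply(sims, 1, quantile, probs=c(0.025, 0.975))
halfnorm(residuals(mod1))
qq <- qnorm((n + 1:n)/(2*n + 1))  # the half-normal quantiles halfnorm() uses
lines(qq, env[1,], lty=2)  # lower envelope
lines(qq, env[2,], lty=2)  # upper envelope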
Diagnostic plots for log-linear models for count data (see chapters 7.2 and 7.7 in Friendly's book). Plot predicted vs. observed values, perhaps with some interval estimate (I did this just for the age groups; here we see again that the estimates are pretty far off due to the overdispersion, except perhaps in group F3. The pink points are the point prediction $\pm$ one standard error):
plot(Days~Age, data=quine)
prs <- predict(mod1, type="response", se.fit=TRUE)
pris <- data.frame(pest=prs$fit, lwr=prs$fit-prs$se.fit, upr=prs$fit+prs$se.fit)
points(pris$pest ~ quine$Age, col="red")
points(pris$lwr ~ quine$Age, col="pink", pch=19)
points(pris$upr ~ quine$Age, col="pink", pch=19)
This should give you much of the useful information about your analysis, and most steps work for all standard count data distributions (e.g., Poisson, negative binomial, COM-Poisson, power laws).
Best Answer
I think this is one of the most challenging parts of doing regression analysis. I also struggle with most of the interpretations (binomial diagnostics in particular are crazy!).
I just stumbled upon this post, http://www.r-bloggers.com/model-validation-interpreting-residual-plots/, which also links to https://web.archive.org/web/20100202230711/http://statmaster.sdu.dk/courses/st111/module04/module.pdf
What helps me the most is to plot the residuals versus every predictor included in the model AND every predictor not included in it, including those that were dropped beforehand for multicollinearity reasons. Boxplots, conditional scatterplots, and ordinary scatterplots are great for this, and they help to spot possible errors.
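Continuing with the quine example from above, a minimal sketch of this (Lrn plays the role of a predictor left out of the model):
res <- residuals(mod1, type="deviance")
plot(res ~ quine$Age)  # boxplots, since Age is a factor
plot(res ~ quine$Sex)
plot(res ~ quine$Lrn)  # Lrn was not included in mod1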
In "Forest Analytics with R" (UseR Series) are some good explanations how to interpret residuals for mixed effects models (and glms as well). Good read! https://www.springer.com/gp/book/9781441977618
Some time ago I thought about a website that could collect residual patterns which users could vote to be "ok" or "not ok", but I never found that website ;)