Agresti (2007) discusses them in chapters 9 and 10; the 2002 edition probably covers them too, as @suncoolsu mentioned.
Agresti refers to a group of response variables as a cluster and discusses the corresponding analysis with marginal models, conditional models and generalized estimating equations.
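Not from Agresti, just a hedged illustration of the GEE route he describes: a minimal sketch using geeglm() from {geepack}. The data frame dat, binary response y, covariate x and cluster identifier id are placeholders I made up; for multicategory responses you would need a package with ordinal/multinomial GEE support.
require(geepack)
# marginal (population-averaged) model for clustered binary responses,
# estimated with generalized estimating equations and an exchangeable
# working correlation within clusters
fit <- geeglm(y ~ x, id = id, family = binomial,
              corstr = "exchangeable", data = dat)
summary(fit)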
Multicollinearity
The issue seems to be that colldiag() needs the model component, which multinom() does not store in the fitted object by default. But if you set model = TRUE in the multinom() call, colldiag() should run without errors.
require(perturb)
require(nnet)   # multinom() lives in {nnet}
set.seed(123)
a <- sample(4, 100, TRUE)   # predictor: 100 draws from 1:4
c <- sample(4, 100, TRUE)   # response: 100 draws from 1:4
a <- as.ordered(a)
c <- as.factor(c)
# model = TRUE stores the model frame, which colldiag() needs
colldiag(mod = multinom(c ~ a, model = TRUE), scale = FALSE,
         center = FALSE, add.intercept = TRUE)
Output:
Condition
Index Variance Decomposition Proportions
intercept X
1 1.000 0.002 0.141
2 7.255 0.998 0.859
Regarding the output of perturb, I don't have a good answer on how to interpret the results. The only comment given in the vignette is that
"If collinearity is a serious problem in the data, then the estimates
will be unstable and vary strongly."
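For what it's worth, here is a hedged sketch (not from the vignette) of how perturb() itself might be used to gauge that stability. The lm() model, variable names and perturbation magnitude are hypothetical, and factors would need the pfac/reclassify machinery rather than pvars.
require(perturb)
# refit the model repeatedly, each time adding small random noise of
# magnitude prange to the numeric predictor x1, then compare the
# coefficient estimates across refits
fit <- lm(y ~ x1 + x2, data = dat)
p <- perturb(fit, pvars = "x1", prange = 1)
summary(p)   # a large spread in the coefficients signals instability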
Autocorrelation
dwtest() from {lmtest} should work with multinom() to test for autocorrelation, though you will need to convert your factor response to a numeric variable.
require(lmtest)
# the factor response is coerced to integer codes, as noted above
dwtest(multinom(as.integer(c) ~ a))
Output:
Durbin-Watson test
data: multinom(as.integer(c) ~ a)
DW = 1.7298, p-value = 0.08517
alternative hypothesis: true autocorrelation is greater than 0
Especially for users of the mlogit function from the {mlogit} package: the multinom() call above did not work for me; my freshly downloaded package version reported that the function does not exist (multinom() comes from {nnet}, not {lmtest}). I also had to reshape my data from the long form used by mlogit() to a data frame where each item has only one row (instead of one row per potential outcome). Then I could run this code:
require(lmtest)
# outcome, predictors and dat are placeholders for your own reshaped data
dwtest(as.numeric(outcome) ~ predictors, data = dat)
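For completeness, a hedged sketch of the reshaping step described above; the object and column names (long_dat, choice, alt) are hypothetical placeholders for an mlogit-style long data set with one row per alternative and a logical column marking the chosen one.
# keep only the rows for the chosen alternative, giving one row per item
wide <- subset(long_dat, choice == TRUE)
wide$outcome <- wide$alt   # the chosen alternative becomes the realized outcome
dwtest(as.numeric(outcome) ~ predictors, data = wide)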
Best Answer
You can treat it like a log-linear model: for response categories $i$ and covariate patterns $j$, the Pearson residual is given by $$r_{ij} = \frac{y_{ij} - \hat{\mu}_{ij}}{\sqrt{\widehat{\operatorname{Var} Y_{ij}}}} = \frac{y_{ij} - \hat{\mu}_{ij}}{\sqrt{\hat{\mu}_{ij}}},$$ where $y_{ij}$ is the observed count and $\hat{\mu}_{ij}$ the expected count according to your fitted model.
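A hedged R sketch of this computation, reusing the toy a, c data and a multinom() fit like the one earlier in this thread (the covariate-pattern bookkeeping is my own illustration, not part of the answer):
require(nnet)
fit <- multinom(c ~ a)
obs <- table(a, c)            # observed counts: rows = covariate patterns, cols = response categories
n_j <- rowSums(obs)           # number of observations per covariate pattern
p_hat <- predict(fit, newdata = data.frame(a = sort(unique(a))), type = "probs")
mu_hat <- p_hat * n_j         # expected counts under the fitted model
pearson_res <- (obs - mu_hat) / sqrt(mu_hat)
pearson_res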