Solved – How to understand output from R’s polr function (ordered logistic regression)


I am new to R, ordered logistic regression, and polr.

The "Examples" section at the bottom of the help page for polr (that fits a logistic or probit regression model to an ordered factor response) shows

options(contrasts = c("contr.treatment", "contr.poly"))
house.plr <- polr(Sat ~ Infl + Type + Cont, weights = Freq, data = housing)
pr <- profile(house.plr)
plot(pr)
pairs(pr)
  • What information does pr contain? The help page on profile is
    generic, and gives no guidance for polr.

  • What is plot(pr) showing? I see six graphs. Each has an X axis that is
    numeric, although the label is an indicator variable (looks like an input variable that is an indicator for an ordinal value). Then the Y axis
    is "tau" which is completely unexplained.

  • What is pairs(pr) showing? It looks like a plot for each pair of input
    variables, but again I see no explanation of the X or Y axes.

  • How can one understand if the model gave a good fit?
    summary(house.plr) shows Residual Deviance 3479.149 and AIC (Akaike
    Information Criterion?) of 3495.149. Is that good? In the case those
    are only useful as relative measures (i.e. to compare to another model
    fit), what is a good absolute measure? Is the residual deviance approximately chi-squared distributed? Can one use "% correctly predicted" on the original data or some cross-validation? What is the easiest way to do that?

  • How does one apply and interpret anova on this model? The docs say "There are methods for the standard model-fitting functions, including predict, summary, vcov, anova." However, running anova(house.plr) results in the error: anova is not implemented for a single "polr" object

  • How does one interpret the t values for each coefficient? Unlike some
    model fits, there are no P values here.

I realize this is a lot of questions, but it makes sense to me to ask as one bundle ("how do I use this thing?") rather than 7 different questions. Any information appreciated.

Best Answer

I would suggest you look at books on categorical data analysis (e.g. Alan Agresti's Categorical Data Analysis, 2002) for a better explanation and understanding of ordered logistic regression. All the questions you ask are basically answered by a few chapters in such books. If you are only interested in R-related examples, Extending the Linear Model with R by Julian Faraway (CRC Press) is a great reference.

Before I answer your questions, note that ordered logistic regression is a case of multinomial logit models in which the categories are ordered. Suppose we have $J$ ordered categories and that for individual $i$, with ordinal response $Y_i$, $p_{ij} = P(Y_i = j)$ for $j = 1, \ldots, J$. With an ordered response, it is often easier to work with the cumulative probabilities, $\gamma_{ij} = P(Y_i \le j)$. The cumulative probabilities are increasing and invariant to combining adjacent categories. Furthermore, $\gamma_{iJ} = 1$, so we need only model $J - 1$ probabilities.

Now we want to link the $\gamma_{ij}$s to covariates $x$. In your case, Sat has 3 ordered levels: low, medium, high. It makes more sense to treat them as ordered rather than unordered. The remaining variables are your covariates. The specific model that you are considering is the proportional odds model, which can be written as:

$$\mbox{logit } \gamma_j(x_i) = \theta_j - \beta^T x_i, \quad j = 1, \ldots, J-1,$$ $$\mbox{where } \gamma_j(x_i) = P(Y_i \le j \mid x_i).$$

It is so called because the relative odds for $Y \le j$ comparing $x_1$ and $x_2$ are:

$$\left(\frac {\gamma_j(x_1)}{1-\gamma_j(x_1)}\right) / \left(\frac {\gamma_j(x_2)}{1-\gamma_j(x_2)}\right)=\exp(-\beta^T (x_1-x_2))$$

Notice that the above expression does not depend on $j$. Of course, the assumption of proportional odds does need to be checked for a given dataset.
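One quick, informal way to eyeball that assumption (a minimal sketch, not a formal test) is to fit separate binary logistic regressions for each cumulative split of the response and check that the covariate coefficients are roughly equal across splits; the fit names below are just illustrative helpers.

library(MASS)  # for the housing data

## Binary logits for P(Sat >= medium) and P(Sat >= high); under proportional
## odds the Infl/Type/Cont coefficients should be roughly similar in both
## (and roughly similar to coef(house.plr) under polr's parametrization).
fit.ge.med  <- glm(I(as.numeric(Sat) >= 2) ~ Infl + Type + Cont,
                   family = binomial, weights = Freq, data = housing)
fit.ge.high <- glm(I(as.numeric(Sat) >= 3) ~ Infl + Type + Cont,
                   family = binomial, weights = Freq, data = housing)
cbind("P(Sat >= medium)" = coef(fit.ge.med),
      "P(Sat >= high)"   = coef(fit.ge.high))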

Now I will answer some of your questions (the goodness-of-fit question, the contents of pr, and the t values).

How can one understand if the model gave a good fit? summary(house.plr) shows Residual Deviance 3479.149 and AIC (Akaike Information Criterion?) of 3495.149. Is that good? In the case those are only useful as relative measures (i.e. to compare to another model fit), what is a good absolute measure? Is the residual deviance approximately chi-squared distributed? Can one use "% correctly predicted" on the original data or some cross-validation? What is the easiest way to do that?

A model fit by polr is a special glm, so all the assumptions that hold for a traditional glm hold here, and once you take care of the parameters properly you can work out the relevant distributions. Specifically, to test whether the model is good or not you may want to do a goodness-of-fit test, which tests the following null hypothesis (notice this is subtle: usually you want to reject the null, but here you do not want to reject it, because failing to reject indicates a good fit):

$$H_o: \mbox{ current model is good enough }$$

You would use the chi-square test for this. The p-value is obtained as:

1 - pchisq(deviance(house.plr), df.residual(house.plr))

Most of the time you would hope to obtain a p-value greater than 0.05, so that you do not reject the null and can conclude that the model is a good fit (philosophical correctness is ignored here).

AIC should be low for a good fit; at the same time, you do not want a large number of parameters (AIC already trades goodness of fit against model complexity). stepAIC is a good way to compare models on this basis.
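For example (a sketch using the house.plr fit from the question and a hypothetical reduced model, here called house.plr2, that drops Cont):

library(MASS)
house.plr  <- polr(Sat ~ Infl + Type + Cont, weights = Freq, data = housing)
house.plr2 <- polr(Sat ~ Infl + Type, weights = Freq, data = housing)  # drops Cont
AIC(house.plr, house.plr2)            # lower AIC is preferred
house.step <- stepAIC(house.plr, trace = FALSE)
house.step$anova                      # record of the steps taken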

Yes, you can definitely use cross-validation to see whether the predictions hold up. See the predict function (option type = "probs") in ?polr. All you need to take care of is supplying the covariates in newdata.
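A minimal sketch of an in-sample "% correctly predicted" is below; a proper cross-validation would refit the model on training folds and predict on held-out folds, but the mechanics of predict are the same.

pred.class <- predict(house.plr, type = "class")   # most probable category per cell
pred.probs <- predict(house.plr, type = "probs")   # matrix of fitted probabilities

## Proportion correctly predicted, weighting each covariate cell by its frequency
sum(housing$Freq[pred.class == housing$Sat]) / sum(housing$Freq)

## For new data, pass the covariates via newdata, e.g.
## predict(house.plr, newdata = your.held.out.data, type = "probs")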

What information does pr contain? The help page on profile is generic, and gives no guidance for polr

As pointed out by @chl and others, pr contains all the information needed for obtaining confidence intervals and other likelihood-related quantities for the polr fit. All glms are fit using iteratively reweighted least squares to maximize the log-likelihood, and that optimization produces a lot of information (please see the references) needed for calculating the variance-covariance matrix, confidence intervals, t-values, and so on. pr includes all of it.
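For instance, a minimal sketch of how pr typically gets used:

pr <- profile(house.plr)
confint(pr)          # profile-likelihood CIs for the coefficients (uses pr)
confint(house.plr)   # equivalent: profiles internally
vcov(house.plr)      # asymptotic variance-covariance matrix of the estimates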

How does one interpret the t values for each coefficient? Unlike some model fits, there are no P values here.

Unlike the normal linear model (a special glm), other glms do not have a nice t-distribution for the regression coefficients. Therefore, all you can get is the parameter estimates and their asymptotic variance-covariance matrix from maximum-likelihood theory:

$$\text{Variance}(\hat \beta) = (X^T W X)^{-1}\hat \phi$$

The estimate divided by its standard error is what BDR and WV call the t-value (I am assuming the MASS convention here). It is analogous to the t-value from normal linear regression but does not follow a t-distribution. By the CLT it is asymptotically normally distributed, but they prefer not to use this approximation (I guess), hence no p-values. (I hope I am not wrong, and if I am, I hope BDR is not on this forum. I further hope someone will correct me if I am wrong.)
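If you do want p-values, a common workaround (not something polr reports itself, and relying on exactly that normal approximation) is:

ctable <- coef(summary(house.plr))                            # Value, Std. Error, t value
p <- 2 * pnorm(abs(ctable[, "t value"]), lower.tail = FALSE)  # two-sided normal approx.
cbind(ctable, "p value" = p)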