I'm analysing survey results where most responses come in the form of Likert scales. Many of these scales have either very few or zero responses in the bottom categories. As you can imagine, this leads to some complications when I try to run my models. I'm still getting encouraging results, but I want to be sure my information is valid before reporting it. Below is a scenario I've been working with over the last couple of days:

I'm using SPSS to run an ordinal regression with two predictors. In this case, the predictors themselves are actually responses on a Likert scale (but entered into the model as nominal variables). My DV is, of course, also ordinal. My two predictors each have five categories (levels on the scale), and my dependent variable also has five levels. Just like the predictors, the dependent variable has very few observations in the bottom categories; in fact, when I run the regression, SPSS warns that 47.5% of cells have frequencies of 0. Yet all my coefficients are significant, the overall model fit (-2 log likelihood) is significant at p < .001, and the odds ratios (the exponentiated forms of my coefficients) all seem reasonable. The model looks like a good one apart from these zero-frequency cells.

My model is failing the proportional odds assumption, which says that the coefficients for each predictor category must be equal across all DV levels. I know this from the Test of Parallel Lines, which SPSS reports as part of the ordinal regression output. So, on the recommendation of an article I found online, I've done two things to explore further. First, I've run separate **logistic** regressions with new dependent variables, each one representing a cutpoint in my original DV. In other words, they indicate whether Y is at or above each of my original DV categories (excluding the bottom one), so my new DVs are: level 2 or above vs. not; level 3 or above vs. not; etc. These did not yield significant relationships for most IV-DV combinations (cells). The idea is to compare the odds ratios across the different cutpoints to see whether they're fairly constant; in my case, since few are significant, they're not.
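The cutpoint recoding above can be sketched in a few lines; this is only an illustration with hypothetical responses coded 1-5, not the actual survey data:

```python
# Hypothetical 5-level ordinal responses, coded 1-5 (not the poster's data)
y = [1, 2, 2, 3, 4, 5, 5, 3, 4, 2]

# One binary DV per cutpoint k = 2..5: "level k or above" vs. not
cutpoint_dvs = {k: [1 if yi >= k else 0 for yi in y] for k in range(2, 6)}

print(cutpoint_dvs[2])  # [0, 1, 1, 1, 1, 1, 1, 1, 1, 1]
print(cutpoint_dvs[5])  # [0, 0, 0, 0, 0, 1, 1, 0, 0, 0]
```

Each binary DV would then be fit with an ordinary logistic regression, and the resulting odds ratios compared across the four cutpoints.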

The second thing I've done is estimate separate **ordinal** regressions using my original dependent variable: one model for each category in my predictors, coded as dummies. So, in 10 separate models (2 predictor variables with 5 categories each), my single predictor would be a 1 for level 2 and a 0 for all other levels; a 1 for level 3 and a 0 for all other levels; etc. For most of these categories, the Test of Parallel Lines is passed (i.e. the null hypothesis that the proportional odds assumption holds is not rejected, which is a good thing). However, a couple of these categories have no observations (nobody responded Very Poor or Poor on one of my predictor Likert scales), so I cannot get a parallel lines p-value for them.
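The per-level dummy coding, and why an empty category breaks the test, can be sketched as follows (hypothetical predictor codes 1-5; here nobody chose level 1, mirroring the empty Very Poor category):

```python
# Hypothetical Likert predictor, coded 1-5; level 1 has no responses
x = [2, 3, 5, 4, 2, 5, 3, 4]

def level_dummy(values, level):
    """1 where the response equals `level`, 0 otherwise."""
    return [1 if v == level else 0 for v in values]

print(level_dummy(x, 2))       # [1, 0, 0, 0, 1, 0, 0, 0]
print(any(level_dummy(x, 1)))  # False: an empty category yields an all-zero
                               # dummy, so no parallel-lines p-value exists
```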

**My question has two parts**.

One is whether it is the bottom levels of the predictor variables that are causing the parallel lines test to fail, and, if the reason is that there are no observations in these categories, whether I can still use the overall odds ratios from my full model. I think this should be no problem, since these categories automatically drop out of the model.

The second question is whether, instead, it might be the low/zero frequencies in the bottom levels of my DV that are causing the parallel lines test to fail. I don't think so, based on the fact that the test is passed for all predictor variable categories that have observations in them. I have tried combining the bottom categories of my DV, which decreases the percentage of cells with frequencies of 0 but does not totally eliminate the problem.

Many thanks for taking the time to consider my question. I would be tremendously grateful for any guidance that you can provide.

## Best Answer

First off, are your two independent variables entered as factors or as numerically coded responses, and is there an interaction term between the two? I ask because the test of proportional odds grows very sensitive with small cell counts. For this reason, I often find it justifiable to enter such variables as their ordinally coded values (1: poor, 2: fair-to-poor, etc.). Doing so borrows information across groups, and proportionality is then assessed in terms of whether units differing by 1 in the predictor show a consistent difference in the odds of a more favorable response at every threshold (a rough and contrived interpretation of the test of proportional odds).
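The switch from factor coding to ordinal numeric scores amounts to a simple mapping; the labels and codes below are hypothetical, chosen to match the question's Very Poor-to-Very Good scale:

```python
# Hypothetical mapping from Likert labels to their ordinal numeric scores
scores = {"Very Poor": 1, "Poor": 2, "Fair": 3, "Good": 4, "Very Good": 5}

responses = ["Good", "Fair", "Very Good", "Good", "Poor"]
x_numeric = [scores[r] for r in responses]  # entered as one numeric predictor

print(x_numeric)  # [4, 3, 5, 4, 2]
```

The model then estimates a single slope for the scored predictor rather than four separate dummy coefficients, which is what allows information to be borrowed across sparsely populated levels.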

If numeric coding still fails to give valid proportionality, you can often obtain consistent cumulative odds ratio estimates by collapsing adjacent categories, such as the two bottom-box responses.
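Collapsing the two bottom-box categories is a one-line recode; again this is a sketch with hypothetical 1-5 codes:

```python
# Hypothetical 5-level responses; merge codes 1 and 2 and relabel the rest
y = [1, 2, 2, 3, 4, 5, 1, 3]

collapse = {1: 1, 2: 1, 3: 2, 4: 3, 5: 4}
y_collapsed = [collapse[v] for v in y]

print(y_collapsed)  # [1, 1, 1, 2, 3, 4, 1, 2]  (now a 4-level scale)
```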

Third, another well-powered test of association between an ordinal response and two ordinal factors is a plain old linear regression model. Using robust standard errors, you get valid confidence intervals regardless of the distribution of the errors. This tends to be less powerful than categorical methods, but it has fewer pitfalls due to zero cell counts.
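A minimal sketch of that approach, computing heteroskedasticity-robust (HC0 "sandwich") standard errors directly with numpy on simulated data (the data-generating process here is invented purely for illustration):

```python
import numpy as np

# Simulated data: an ordinal 1-5 predictor with heteroskedastic errors
rng = np.random.default_rng(0)
n = 200
x = rng.integers(1, 6, size=n).astype(float)
y = 0.5 * x + rng.normal(scale=x, size=n)  # error spread grows with x

X = np.column_stack([np.ones(n), x])          # design matrix with intercept
beta = np.linalg.solve(X.T @ X, X.T @ y)      # OLS coefficients
resid = y - X @ beta

# Sandwich estimator: (X'X)^-1  X' diag(e^2) X  (X'X)^-1
XtX_inv = np.linalg.inv(X.T @ X)
meat = X.T @ (resid[:, None] ** 2 * X)
robust_cov = XtX_inv @ meat @ XtX_inv
robust_se = np.sqrt(np.diag(robust_cov))

print(beta, robust_se)
```

In practice you would get the same thing from canned routines (e.g. `fit(cov_type="HC0")` in statsmodels, or the sandwich estimators in R and SAS that the last paragraph mentions), but the hand computation shows there is nothing exotic involved.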

Lastly, as a comment, robust standard errors allow consistent estimation of the mean model in most circumstances. I'm not sure whether these are implemented in SPSS, but R and SAS use them frequently. As with the proportional hazards assumption in the Cox model, when this "model-based assumption check" fails, it does not mean the model results are entirely invalid; it's just that the effect estimates are "averaged" over their inconsistent proportionality. For instance, if a proportional odds model has excessive numbers of respondents giving top-box responses, and a predictor shows a large association for the top-box response but smaller associations for the other cumulative measures, then the cumulative odds ratio will be a weighted combination of the several thresholded odds ratios, with a higher weight placed on the top-box OR.
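As a stylized illustration of that averaging (the numbers and weights below are invented, and the exact weights in a real fit depend on the data, so treat this as intuition rather than a formula): with most responses concentrated at the top box, the pooled cumulative OR lands much closer to the top-box OR than to the others.

```python
import math

# Hypothetical per-threshold odds ratios and response-driven weights;
# 70% of the weight sits at the top-box threshold
threshold_ors = [1.2, 1.3, 3.0]
weights = [0.15, 0.15, 0.70]

# Weighted average on the log-odds scale, then back-transformed
log_avg = sum(w * math.log(o) for w, o in zip(weights, threshold_ors))
pooled_or = math.exp(log_avg)

print(pooled_or)  # between 1.2 and 3.0, pulled toward the top-box OR
```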