First off, are your two independent variables entered as factors or as numerically coded predictors, and is there an interaction term between them? I ask because the test of proportional odds becomes very sensitive with small cell counts. For this reason, I often find it justifiable to enter ordinal predictors as their numeric scores (1: poor, 2: fair-to-poor, etc.). Doing so borrows information across groups, and proportionality is then assessed for a single slope: the difference in the odds of a more favorable response, comparing units differing by 1 in the predictor, is assumed consistent with the corresponding odds of an even more favorable response (the rough and admittedly contrived interpretation of the test of proportional odds).
If your numeric coding still fails to give valid proportionality, you can often obtain consistent cumulative odds ratio estimates by collapsing adjacent categories, such as the two bottom-box responses.
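As a hedged sketch of the coding point (simulated data, not your data): in R, MASS::polr() fits the proportional odds model, and entering an ordinal predictor by its numeric score estimates one slope instead of one coefficient per level.

```r
# Hedged sketch with simulated data: entering an ordinal predictor as its
# numeric score (one slope) versus as a factor (one coefficient per level).
library(MASS)  # polr(); MASS ships with standard R installations

set.seed(42)
n <- 300
x <- sample(1:4, n, replace = TRUE)     # ordinal score, e.g. 1: poor, 2: fair-to-poor, ...
latent <- 0.5 * x + rlogis(n)           # latent-variable data generation
y <- cut(latent, c(-Inf, 1, 2, 3, Inf), ordered_result = TRUE)

fit_factor  <- polr(y ~ factor(x))  # 3 dummy coefficients, sensitive to small cells
fit_numeric <- polr(y ~ x)          # 1 slope: information borrowed across groups
c(length(coef(fit_factor)), length(coef(fit_numeric)))
```

The single-slope fit is the one whose proportionality interpretation is described above.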
Thirdly, another well-powered test of association between an ordinal response and two ordinal factors is plain old linear regression on the numeric scores. With robust standard errors, you get valid confidence intervals regardless of the distribution of the errors. This tends to be less powerful than categorical methods, but it has fewer pitfalls from zero cell counts.
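A minimal base-R sketch of that idea, on simulated data (HC0 is just one of several robust "sandwich" variants; the sandwich package provides them all ready-made):

```r
# Hedged sketch: OLS on ordinal scores with heteroskedasticity-robust (HC0)
# standard errors computed by hand in base R; data are simulated.
set.seed(1)
n <- 200
x <- rnorm(n)
latent <- 0.8 * x + rlogis(n)
y <- cut(latent, c(-Inf, -1, 0, 1, Inf), labels = FALSE)  # scores 1..4

fit <- lm(y ~ x)
X <- model.matrix(fit)
u <- residuals(fit)
bread <- solve(crossprod(X))
meat  <- crossprod(X * u)               # X' diag(u^2) X
vcov_hc0  <- bread %*% meat %*% bread   # sandwich estimator
robust_se <- sqrt(diag(vcov_hc0))
cbind(estimate = coef(fit), robust_se)  # basis for valid Wald intervals
```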
Lastly, as a comment, robust standard errors allow consistent estimation of the mean model in most circumstances. I'm not sure whether these are implemented in SPSS, but R and SAS use them frequently. As with the proportional hazards assumption in the Cox model, when this model-based assumption check fails, it does not mean the model results are entirely invalid; rather, the effect estimates are "averaged" over their inconsistent proportionality. For instance, if a proportional odds model has a large share of respondents giving top-box responses, and a predictor shows a large association for the top-box response but smaller associations for the other cumulative splits, you'll find that the cumulative odds ratio is a weighted combination of the several thresholded odds ratios, with a higher weight placed on the top-box OR.
Results from an ordered logit/probit regression can be unintuitive, but categorical explanatory variables are just as meaningful as continuous ones. I'd even say they are easier to interpret.
For a concrete example, you could look at Dobson, An Introduction to Generalized Linear Models, 2002, 2nd ed., Chapter 8. In her "car preferences" example, the dependent variable is the importance of air conditioning and power steering (three levels: "no or little importance", "important", "very important") and the two explanatory variables are gender (male or female, coded as 1 and 0) and age (18-23, 24-40, >40, coded as age2440 = 1 or 0, and agegt40 = 1 or 0).
Fitting an ordered probit model you get (I've used R, MASS library, polr() function):
Coefficients:
male age2440 agegt40
-0.3467 0.6817 1.3288
Intercepts:
NoImp|Imp Imp|VeryImp
0.01844 0.97594
Then you can compute the probabilities for women (male = 0) over 40 (age2440 = 0, agegt40 = 1):
NoImp Imp VeryImp
0.095 0.267 0.638
and for men over 40 (male = 1):
NoImp Imp VeryImp
0.168 0.330 0.502
Their difference (women minus men) is the partial effect of gender:
NoImp Imp VeryImp
-0.073 -0.063 0.136
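You can reproduce these figures directly from the estimates printed above: polr() parameterizes the ordered probit as P(Y <= j) = pnorm(alpha_j - X beta). A sketch using the rounded coefficients:

```r
# Reproduce the predicted probabilities from the rounded polr() estimates above.
beta  <- c(male = -0.3467, age2440 = 0.6817, agegt40 = 1.3288)
alpha <- c(0.01844, 0.97594)   # NoImp|Imp and Imp|VeryImp thresholds

probs <- function(x) {
  eta <- sum(beta * x)
  cum <- pnorm(alpha - eta)    # P(Y <= j), the polr() parameterization
  c(NoImp = cum[1], Imp = cum[2] - cum[1], VeryImp = 1 - cum[2])
}

women_over_40 <- probs(c(male = 0, age2440 = 0, agegt40 = 1))
men_over_40   <- probs(c(male = 1, age2440 = 0, agegt40 = 1))
round(women_over_40 - men_over_40, 3)   # the gender partial effect
```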
I think that it's meaningful ;-)
Best Answer
It looks to me like you are looking for the "partial" version of proportional odds. Reference:
B. Peterson and F. E. Harrell Jr., "Partial Proportional Odds Models for Ordinal Response Variables," Applied Statistics, vol. 39, no. 2, pp. 205–217, 1990.
In the "standard" ordered logit (or proportional odds) model, the cumulative probability is modeled as
$$ P(Y > j | X_i) = \frac{1}{1 + \exp(-\alpha_j - X_i \beta)} $$
where $\alpha$ is the vector of thresholds (as many as the number of classes minus 1) and $\beta$ is the vector of coefficients. In the partial version of the proportional odds model, the cumulative probability instead takes the more general form $$ P(Y > j | X_i) = \frac{1}{1 + \exp(-\alpha_j - X_i \beta - T_i \gamma_j )} $$ where $T_i$ is a vector containing the values of observation $i$ on the subset of explanatory variables for which the proportional odds assumption is either not assumed or not verified, and $\gamma_j$ is a vector of threshold-specific coefficients (to be estimated) associated with the variables in $T_i$.
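A small base-R numeric sketch of this formula, with made-up illustrative values for $\alpha$, $\beta$, and $\gamma_j$ (not a fitted model; fitting partial proportional odds models in R can be done with, e.g., the VGAM package):

```r
# Numeric sketch of the partial proportional odds formula with made-up values
# (three response classes, so j = 1, 2); purely illustrative.
alpha <- c(1.0, -0.5)   # thresholds alpha_j
beta  <- 0.8            # common slope for x (the proportional part)
gamma <- c(0.3, -0.6)   # threshold-specific slopes gamma_j for t

p_gt <- function(x, t) {
  # P(Y > j | x, t) = 1 / (1 + exp(-(alpha_j + x*beta + t*gamma_j)))
  plogis(alpha + x * beta + t * gamma)
}

# For x, the cumulative log odds ratio is constant (= beta) across thresholds...
qlogis(p_gt(1, 0)) - qlogis(p_gt(0, 0))
# ...while for t it varies with j (= gamma_j): the "partial" in partial PO.
qlogis(p_gt(0, 1)) - qlogis(p_gt(0, 0))
```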