The specific values on your scale (i.e., 0 through 10) are irrelevant; only their order matters. The various ordered models, under their myriad names, use only the order and discard the specific scale values. (The specific values only really matter when the data have been rounded or categorized, neither of which has happened in your case.)
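One way to see that only the order matters is with a rank-based statistic such as Spearman correlation, which is unchanged by any strictly increasing relabeling of the scale. A small illustrative sketch (the data here are made up for demonstration):

```python
import numpy as np

def rank(a):
    """Assign average ranks (ties share the mean of their positions)."""
    a = np.asarray(a, dtype=float)
    order = a.argsort(kind="stable")
    ranks = np.empty(len(a))
    ranks[order] = np.arange(1, len(a) + 1)
    for v in np.unique(a):          # average ranks within tied groups
        mask = a == v
        ranks[mask] = ranks[mask].mean()
    return ranks

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    return np.corrcoef(rank(x), rank(y))[0, 1]

scores = np.array([0, 3, 5, 7, 8, 9, 10, 6, 2, 10])   # 0-10 responses
driver = np.array([1, 2, 2, 3, 4, 4, 5, 3, 1, 5])     # an ordinal predictor

# A strictly increasing relabeling of the 0-10 scale: the ranks,
# and hence the statistic, are identical.
relabeled = scores ** 2 + 100
print(spearman(scores, driver))
print(spearman(relabeled, driver))
```

Ordered regression models behave the same way: refitting after any monotone relabeling of the response leaves the fit unchanged.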
Other than ensuring that you have at least a few observations in each group, I would not worry too much about the proportion of observations in each group: I can't think of any statistical justification for such a requirement, and almost no real-world data set would meet it anyway.
One other thing to keep in mind: you will get a bit more statistical power if you estimate the model using all 11 response values and apply the Net Promoter Score coding only at prediction time. (If you do this, check that the parameter estimates for the independent variables are approximately the same either way.)
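That workflow — model the raw 0-10 responses, collapse only the predictions — can be sketched as follows, assuming the standard NPS cutoffs (0-6 detractor, 7-8 passive, 9-10 promoter); the `predicted` values below are hypothetical:

```python
def nps_category(score):
    """Map a 0-10 likelihood-to-recommend score to its NPS class
    (standard cutoffs: 0-6 detractor, 7-8 passive, 9-10 promoter)."""
    if not 0 <= score <= 10:
        raise ValueError("score must be on the 0-10 scale")
    if score <= 6:
        return "detractor"
    if score <= 8:
        return "passive"
    return "promoter"

def nps(scores):
    """Net Promoter Score: percent promoters minus percent detractors."""
    cats = [nps_category(s) for s in scores]
    return 100 * (cats.count("promoter") - cats.count("detractor")) / len(cats)

# Fit the ordinal model on the raw 0-10 responses elsewhere; apply the
# NPS coding only to the model's predicted scores:
predicted = [9, 10, 7, 6, 8, 3, 10]
print(nps(predicted))
```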
First off, are your two independent variables entered as factors or as numerically coded values, and is there an interaction term between them? I ask because the test of proportional odds becomes very sensitive when cell counts are small. For this reason, I often find it justifiable to enter predictors as their ordinal numeric codes (1: poor, 2: fair-to-poor, etc.). Doing so borrows information across groups, and proportionality is then assessed in the sense that the difference in the odds of a more favorable response, comparing units differing by 1 on the predictor, is consistent with the corresponding difference in the odds of an even more favorable response (a rough and contrived interpretation of the test of proportional odds).
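Numerically coding an ordinal predictor is just a label-to-integer mapping; the category labels below are illustrative, not your actual scale:

```python
# Hypothetical ordinal scale for a predictor; equally spaced integer codes.
CODES = {"poor": 1, "fair-to-poor": 2, "fair": 3, "good": 4, "excellent": 5}

def code_ordinal(values, codes=CODES):
    """Replace ordinal category labels with numeric codes, so the
    predictor enters the model with a single slope rather than one
    dummy coefficient per level (which borrows information across
    sparse cells)."""
    return [codes[v] for v in values]

print(code_ordinal(["poor", "good", "excellent"]))  # [1, 4, 5]
```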
If the numeric coding still fails to give valid proportionality, you can often obtain consistent cumulative odds ratio estimates by collapsing adjacent categories, such as the bottom two box responses.
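Collapsing adjacent categories is a simple recode of the response; a minimal sketch (merging the two lowest observed levels):

```python
def collapse_bottom_two(y):
    """Merge the two lowest observed categories of an ordinal response
    into one, e.g. when sparse bottom-box cells destabilize the
    proportional-odds test. Leaves responses with fewer than three
    levels unchanged."""
    levels = sorted(set(y))
    if len(levels) < 3:
        return list(y)
    lo1, lo2 = levels[0], levels[1]
    return [lo2 if v == lo1 else v for v in y]

y = [1, 1, 2, 3, 3, 4, 5]
print(collapse_bottom_two(y))  # [2, 2, 2, 3, 3, 4, 5]
```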
Thirdly, another well-powered test of association between an ordinal response and two ordinal factors is plain old linear regression. Using robust standard errors, you get valid confidence intervals regardless of the distribution of the errors. This tends to be less powerful than categorical methods, but it has fewer pitfalls from zero cell counts.
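A minimal sketch of that approach, with the HC0 (White) sandwich estimator written out by hand on simulated 0-10 data (the data-generating numbers are made up; in practice you would use your software's built-in robust covariance option):

```python
import numpy as np

def ols_robust(X, y):
    """OLS coefficients plus HC0 heteroskedasticity-robust standard
    errors via the sandwich (X'X)^-1 [X' diag(e^2) X] (X'X)^-1."""
    X = np.column_stack([np.ones(len(y)), X])   # prepend an intercept
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    meat = X.T @ (resid[:, None] ** 2 * X)      # sum of e_i^2 x_i x_i'
    cov = XtX_inv @ meat @ XtX_inv
    return beta, np.sqrt(np.diag(cov))

# Simulated ordinal-looking response: linear signal, rounded and
# clipped to the 0-10 scale (so the errors are clearly non-normal).
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = np.clip(np.round(2 + 0.6 * x + rng.normal(0, 1.5, 200)), 0, 10)

beta, se = ols_robust(x[:, None], y)
print(beta)  # [intercept, slope]; slope should land near 0.6
print(se)
```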
Lastly, as a comment, robust standard errors allow consistent estimation of the mean model in most circumstances. I'm not sure whether these are implemented in SPSS, but R and SAS users rely on them frequently. As with the proportional hazards assumption in the Cox model, when this model-based assumption check fails, it does not mean the model results are entirely invalid; rather, the effect estimates are "averaged" over the inconsistent proportionality. For instance, if a proportional odds model has a large number of respondents giving top box responses, and a predictor shows a large association for the top box threshold but smaller associations for the other cumulative splits, then the single cumulative odds ratio will be a weighted combination of the several threshold-specific odds ratios, with more weight placed on the top box OR.
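The threshold-specific odds ratios being averaged can be computed empirically at each cumulative split; a sketch with a made-up binary predictor, where the ORs at the two splits clearly differ (so a proportional-odds fit would return a compromise between them):

```python
import numpy as np

def threshold_odds_ratios(y, group):
    """Empirical odds ratio of exceeding each cumulative threshold,
    comparing group == 1 to group == 0 (one OR per split). Assumes
    no cell is empty, so the odds are finite."""
    y, group = np.asarray(y), np.asarray(group)
    out = {}
    for t in sorted(set(y))[:-1]:       # split between t and next level
        p1 = (y[group == 1] > t).mean()
        p0 = (y[group == 0] > t).mean()
        out[t] = (p1 / (1 - p1)) / (p0 / (1 - p0))
    return out

# Hypothetical 3-level ordinal response in two groups of 5.
y   = [1, 1, 2, 2, 3,  1, 2, 3, 3, 3]
grp = [0, 0, 0, 0, 0,  1, 1, 1, 1, 1]
ors = threshold_odds_ratios(y, grp)
print(ors)   # the OR at the top split is larger than at the bottom split
```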
Best Answer
The parallel regression assumption (also called the proportional odds assumption) in ordinal logistic regression says that the coefficients describing the odds of being in the lowest category versus all higher categories of the response variable are the same as those describing the odds of being in the lowest two categories versus all higher categories, and so on for every cumulative split.
This is simply a consequence of how the ordered logit model is defined: there is only one set of coefficients for all of the cumulative odds you are modeling (lowest category vs. all higher, lowest two vs. all higher, etc.). If you include quadratic or other non-linear terms, it is still that same single set of coefficients for every split. So whether or not you add quadratic or non-linear terms has nothing to do with the parallel regression assumption.
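The "parallel" in the name can be seen directly from the model's formula. A sketch with made-up cutpoints and slope: under an ordered logit, the log cumulative odds of P(Y ≤ j | x) equal α_j − βx, so curves for different j differ only in their intercepts and shift by the same amount −βΔx for any change in x:

```python
import math

def cum_logit_prob(x, cutpoints, beta):
    """P(Y <= j | x) under an ordered logit: logistic(alpha_j - beta*x),
    one shared slope beta and one cutpoint alpha_j per split."""
    logistic = lambda z: 1 / (1 + math.exp(-z))
    return [logistic(a - beta * x) for a in cutpoints]

cuts = [-1.0, 0.5, 2.0]   # 4 response categories -> 3 cutpoints (made up)
beta = 0.8                # the single shared slope

for x in (0.0, 1.0, 2.0):
    log_odds = [math.log(p / (1 - p)) for p in cum_logit_prob(x, cuts, beta)]
    print(x, log_odds)
# Moving x by 1 shifts every split's log cumulative odds by exactly
# -beta: the three curves are parallel lines in x.
```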