Solved – Relative importance of predictors – Standardized coefficients in Ordinal Logistic Regression

ordered-logit, regression, spss

I am a 4th-year psychology student, and I need some help understanding the coefficients in ORDINAL logistic regression. According to Williams (2009), "Using Heterogeneous Choice Models to Compare Logit and Probit Coefficients Across Groups", the predictor variables and residuals are already standardized to the logistic distribution (variance = π²/3), and therefore so are the coefficients reported by SPSS. So, in order to compare the relative predictive strength of my variables in the model, I should be able to compare the coefficients (or the odds ratios) directly. However, how do I account for the differences in standard error/CI when making these comparisons?

For example,

variable 1: B=.021, std error = .0068, Exp(B) = 1.022, 95% CI = 1.008 to 1.035

variable 2: B=.051, std error = .0174, Exp(B) = 1.052, 95% CI = 1.017 to 1.089

Comparing the Bs, variable 2 looks like the stronger predictor, but its standard error and CI are much larger. So what conclusion can I draw?

Best Answer

In general, if your predictors are on different metrics, then a subjective assessment of variable importance cannot easily be made by simply comparing the raw sizes of the odds ratios. For example, the same predictor recorded in months rather than years would show a much smaller B and odds ratio, even though its predictive importance is unchanged.

If all your predictors are continuous, then I think converting the variables to z-scores would be useful for getting a sense of their relative importance. You mention that you have a skewed numeric predictor; I don't think that changes much about whether z-scores are appropriate. You do have the separate issue of whether you want to apply a shape transformation (z-scores only change the mean and variance). If the variable is highly skewed, consider a transformation first, and then z-score the transformed variable.
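If it helps, here is a minimal sketch of that workflow in Python; the simulated data, variable names, and the use of statsmodels' `OrderedModel` are my own stand-ins, not anything from your SPSS output. A skewed predictor is log-transformed and then z-scored along with the other predictor, so each coefficient ends up on a "per 1 SD of the predictor" scale and the Bs (and exp(B)s) are more directly comparable:

```python
import numpy as np
import pandas as pd
from scipy.stats import zscore
from statsmodels.miscmodels.ordinal_model import OrderedModel

# --- Hypothetical data (stand-ins for your real variables) ----------------
rng = np.random.default_rng(42)
n = 500
x1 = rng.normal(50, 10, n)                   # roughly symmetric continuous predictor
x2 = rng.lognormal(mean=0, sigma=1, size=n)  # right-skewed continuous predictor

latent = 0.03 * x1 + 0.25 * np.log(x2) + rng.logistic(size=n)
y = pd.cut(latent, bins=[-np.inf, 1.0, 2.5, np.inf], labels=False)  # ordinal 0/1/2

# --- Shape transformation for the skewed predictor, then z-score both -----
df = pd.DataFrame({
    "y": y,
    "z_x1": zscore(x1),
    "z_log_x2": zscore(np.log(x2)),   # transform first, then z-score
})

# --- Ordinal (proportional-odds) logit on the standardized predictors -----
model = OrderedModel(df["y"], df[["z_x1", "z_log_x2"]], distr="logit")
result = model.fit(method="bfgs", disp=False)

# Each B is now the change in log-odds per 1 SD of its predictor,
# so coefficients (and their exp(B)s) are on a comparable scale.
print(result.summary())
print(np.exp(result.params[["z_x1", "z_log_x2"]]))   # per-SD odds ratios
```

The point of the sketch is only the pre-processing step; the same idea applies if you do the standardization in SPSS before running PLUM.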

Sometimes you have binary predictors. In that case, the 0-1 scoring is quite intuitive, especially if you only have a few such variables.
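As a small follow-up (the numbers below are made up, purely for illustration): with a 0-1 coded predictor you can leave it as-is and read exp(B) as the odds ratio for the "1" group versus the "0" group, while for a z-scored continuous predictor exp(B) is an odds ratio per 1 SD increase. The scales differ, but both are fairly easy to interpret:

```python
import numpy as np

# Hypothetical coefficients from an ordinal logit fit
b_binary = 0.40    # predictor coded 0/1 (e.g., group membership)
b_zscored = 0.25   # continuous predictor converted to z-scores

print(np.exp(b_binary))   # ~1.49: odds of a higher outcome category, group 1 vs group 0
print(np.exp(b_zscored))  # ~1.28: odds of a higher outcome category per 1 SD increase
```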