Basically what the title says. I have run a logistic regression with an ordinal independent variable (a scale from 1-7) and get an odds ratio of 2. I am unsure how to interpret this. Do I find the odds of being in the "yes" category of the binary dependent variable given that the independent variable is 1, and then report that these odds double with each one-unit increase in the independent variable?
Solved – Logistic regression with ordinal independent variable: How to interpret odds-ratio
logistic, odds-ratio, ordinal-data, regression
Related Solutions
The problem with an ordinal independent variable is that, by definition, the true metric intervals between its levels are unknown, so no specific type of relationship - beyond the umbrella term "monotonic" - can be assumed a priori. We have to deal with this somehow, for example by screening or combining variants, or by preferring whatever maximizes some criterion.
If you insist on treating your Likert-type rating IV as ordinal (rather than interval or nominal), here are a few alternatives for you.
- Use polynomial contrasts, i.e. each such predictor enters the model not only linearly but also quadratically and cubically. That way, not only a linear but a more general, monotonic effect can be captured (the linear effect corresponds to treating the predictor as scale/interval, while the other two effects treat it as having unequal intervals). Additionally, dummies for each predictor could be entered as well, which will test for the nominal/factorial effect. At the end of all that, you know how much your predictor acts as a factor, how much as a linear covariate, and how much as a nonlinear covariate. This option is easy to do in almost any regression (linear, logistic, other generalized linear models). It does consume degrees of freedom, so the sample size should be large enough.
- Use optimal scaling regression. This approach monotonically transforms an ordinal predictor into an interval one so as to maximize its linear effect on the predictand. CATREG (categorical regression) is an implementation of this idea in SPSS. One complication in your specific case is that you want to do logistic, not linear, regression, and CATREG is not based on a logit model. I think this obstacle is relatively minor since your predictand has only 2 categories (binary): you might still run CATREG for the optimal scaling, then do the final logistic regression with the obtained transformed predictors.
- Note also that in the simple case of one scale or ordinal DV and one ordinal IV, the Jonckheere-Terpstra test might be a reasonable analysis instead of regression.
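To make the first alternative concrete, here is a minimal sketch of how orthogonal polynomial contrasts (linear, quadratic, cubic) for a 7-level ordinal predictor can be constructed. The level codes 1-7 and the Gram-Schmidt construction are assumptions for illustration; statistical software (e.g. R's `contr.poly`) builds these for you.

```python
# Sketch: orthogonal polynomial contrasts (linear, quadratic, cubic) for a
# 7-level ordinal predictor, assuming equally spaced level codes 1..7.
# Built by Gram-Schmidt orthogonalization of the powers of the level codes.

def orthogonal_poly_contrasts(levels, degree=3):
    """Return unit-length contrast vectors orthogonal to each other
    and to the intercept (a column of ones)."""
    levels = list(levels)
    n = len(levels)
    cols = [[x ** d for x in levels] for d in range(1, degree + 1)]
    basis = []
    for col in cols:
        v = list(col)
        # Center: makes the contrast orthogonal to the intercept
        mean = sum(v) / n
        v = [x - mean for x in v]
        # Remove projections onto the lower-order contrasts already built
        for b in basis:
            proj = sum(vi * bi for vi, bi in zip(v, b))
            v = [vi - proj * bi for vi, bi in zip(v, b)]
        # Normalize to unit length
        norm = sum(vi * vi for vi in v) ** 0.5
        basis.append([vi / norm for vi in v])
    return basis

lin, quad, cub = orthogonal_poly_contrasts(range(1, 8))
# lin is proportional to (-3, -2, -1, 0, 1, 2, 3): the "linear trend" column;
# quad and cub pick up curvature, i.e. departures from equal intervals.
```

Entering `lin`, `quad`, and `cub` as three predictors in place of the raw 1-7 codes lets you test the linear and nonlinear components separately.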
There could be other suggestions, too. The three above are what come to mind on a first reading of your question.
Let me also recommend visiting these threads: Association between nominal and scale or ordinal variables; Association between ordinal and scale variables. They could be helpful even though they are not specifically about regression.
These threads, however, are about regression, particularly logistic regression, and are worth looking inside: one, two, three, four, five.
None of those interpretations is quite right. I think you need to connect a few concepts first. (The numbering of ideas below does not correspond to your own numbering.)
Conditional logistic regression differs from "ordinary" logistic regression only in that the analysis is based on matched sets, so when interpreting the effects you must state what you are controlling for, or refer to the matching in some way. For instance, if this were a twin analysis, you would say something like "Smoking was associated with a 2-fold difference in the odds of psychiatric disorder among twins".
The (exponentiated) coefficient for an interaction (or product) term in a logistic regression is not an odds ratio; it is a ratio of odds ratios, or an odds ratio ratio (ORR). The point is that you never observe a "difference" or "increase" in the product term without a difference in the lower-order terms, so the standard interpretation doesn't apply.
In a logistic regression model, the interpretation of the (exponentiated) coefficient for an interaction term (say, between X and W) is as follows: "For a unit difference in W, the odds ratio relating Y and X is multiplied by $\exp(\gamma)$".
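A quick numerical sketch of this ratio-of-odds-ratios fact, using hypothetical coefficients (none of these numbers come from the question):

```python
import math

# Hypothetical logit model: logit(p) = b0 + b1*X + b2*W + g*X*W
# All coefficient values below are assumptions for illustration.
b0, b1, b2, g = -1.0, 0.4, 0.2, 0.3

def odds(x, w):
    return math.exp(b0 + b1 * x + b2 * w + g * x * w)

def or_x(w):
    # Odds ratio for a one-unit difference in X, holding W fixed at w
    return odds(1, w) / odds(0, w)

# The X odds ratio itself depends on W; the ratio of X odds ratios across
# a one-unit difference in W is exp(g), the exponentiated interaction term:
orr = or_x(5) / or_x(4)
```

Whatever values of W you compare, `orr` always comes out to `exp(g)`, which is exactly the "ratio of odds ratios" reading above.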
Best Answer
It depends on how you coded it when entering it into your statistical model. If you entered the independent variable as a continuous variable (i.e. just entering the values 1-7), your interpretation is correct: an odds ratio of 2 indicates that the odds of the dependent variable being 1/"yes" increase by a factor of 2 for each one-point increase in the independent variable.
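A tiny sketch of what that means numerically; the intercept here is an assumption, and only the slope (log of the odds ratio 2) comes from the question:

```python
import math

# Continuous coding: logit(p) = beta0 + beta1 * x, with exp(beta1) = 2
beta0 = -2.0          # assumed baseline log-odds (hypothetical)
beta1 = math.log(2)   # slope implied by an odds ratio of 2

def odds(x):
    return math.exp(beta0 + beta1 * x)

# Every one-point step on the 1-7 scale multiplies the odds by 2,
# regardless of where on the scale the step occurs:
ratios = [odds(x + 1) / odds(x) for x in range(1, 7)]
```

Note the built-in assumption: the jump from 1 to 2 has exactly the same multiplicative effect as the jump from 6 to 7.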
If, however, you entered the independent variable (the 1-7 scale) as a categorical one (by creating binary dummy indicator variables for each level except a reference level), you would obtain odds ratios for six of the seven possible values, each compared to the reference level. The advantage over the method above is that by categorizing this way you do not assume a linear or proportional effect along the scale. Do note that this costs 6 degrees of freedom instead of 1, and could therefore require more data to obtain reliable results.
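The dummy coding itself can be sketched as follows; the choice of level 1 as the reference is arbitrary and just for illustration:

```python
# Sketch: dummy (indicator) coding of a 1-7 ordinal predictor with level 1
# as the reference category -- six 0/1 columns replace the single raw value.
def dummy_code(value, levels=range(1, 8), reference=1):
    return [1 if value == lvl else 0 for lvl in levels if lvl != reference]

row = dummy_code(3)   # a respondent who answered 3 on the 1-7 scale
# Each of the six fitted coefficients then yields an odds ratio comparing
# that level to the reference level, with no linearity assumed.
```

The reference level (all dummies 0) is the category every exponentiated coefficient is compared against.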