Solved – Comparison of two odds ratios: Take 2

Tags: effect-size, logistic, odds-ratio, r

I would like to test the difference between two odds ratios given the following R output:

library(mice)   # imp is a mids object created by mice()
f1 <- with(data = imp, glm(Y ~ X1 + X2, family = binomial(link = "logit")))
s01 <- summary(pool(f1))
s01
                   est        se         t       df   Pr(>|t|)
(Intercept) -1.7805826 0.1857663 -9.585070 391.0135 0.00000000
X1           0.2662796 0.1308970  2.034268 390.4602 0.04259997
X2           0.6757952 0.3869652  1.746398 395.6098 0.08151794
cbind(exp(s01[, c("est", "lo 95", "hi 95")]), pval=s01[, "Pr(>|t|)"])
                  est     lo 95     hi 95       pval
(Intercept) 0.1685399 0.1169734 0.2428389 0.00000000
X1          1.3051000 1.0089684 1.6881459 0.04259997
X2          1.9655955 0.9185398 4.2062035 0.08151794

To do so, I would need to take the difference of the two log odds ratios and obtain the standard error of that difference (as outlined here: Statistical test for difference between two odds ratios?).

One of the predictor variables is continuous, and I am not sure how to compute the values required for $SE(\log \mathrm{OR})$.
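For concreteness, the linked method amounts to a Wald test of the contrast $\beta_{X1} - \beta_{X2} = 0$. A minimal sketch, assuming a single complete-data fit (the objects fit and dat below are hypothetical) rather than the pooled mice object above:

# Hypothetical complete-data fit; with multiple imputation the pooled
# coefficients and the pooled covariance matrix would be needed instead.
fit <- glm(Y ~ X1 + X2, family = binomial(link = "logit"), data = dat)

b <- coef(fit)   # coefficients = log odds ratios
V <- vcov(fit)   # estimated covariance matrix of the coefficients

# Var(b1 - b2) = Var(b1) + Var(b2) - 2*Cov(b1, b2); a continuous predictor
# changes nothing here, only the coefficient variances and covariance enter
d    <- b["X1"] - b["X2"]
se_d <- sqrt(V["X1", "X1"] + V["X2", "X2"] - 2 * V["X1", "X2"])

z <- d / se_d
p <- 2 * pnorm(-abs(z))   # two-sided Wald p-value
c(diff = unname(d), se = se_d, z = unname(z), p = unname(p))

Note that the only piece missing from the pooled summary above is the covariance between the two coefficient estimates; the standard errors alone are not enough for $SE(\log \mathrm{OR}_{X1} - \log \mathrm{OR}_{X2})$ unless the estimates were independent.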

Could someone please explain whether the output I have is conducive to this method?

Best Answer

If you want to test whether exp(X1) is statistically different from exp(X2), i.e. whether the two odds ratios differ, then you do not need to perform any formal test in this case.

Just look at the confidence intervals. In general, overlapping confidence intervals alone do not tell us whether two point estimates are statistically different, but this is a special case:

The confidence interval for exp(X1) is (1.01, 1.69), while for exp(X2) it is (0.92, 4.21).

The confidence interval for exp(X1) is completely contained within the confidence interval for exp(X2).

Therefore the two odds ratios are not statistically different.
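As a quick check on the containment claim, reading the exponentiated 95% limits straight off the pooled summary above:

ci_X1 <- c(1.0089684, 1.6881459)   # exp(lo 95), exp(hi 95) for X1
ci_X2 <- c(0.9185398, 4.2062035)   # exp(lo 95), exp(hi 95) for X2
ci_X1[1] >= ci_X2[1] && ci_X1[2] <= ci_X2[2]   # TRUE: X1's interval lies inside X2's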