Solved – Inference about the outcomes of two logistic regressions

logistic, model selection, statistical significance

I have run two separate logistic regressions, each with a single predictor, and would like to assess which model fits the data better.

Here's the output for both:

Model 1:

Deviance Residuals: 
Min      1Q  Median      3Q     Max  
-1.280  -1.046  -1.046   1.078   1.315  

Coefficients:
             Estimate Std. Error z value Pr(>|z|)  
(Intercept)  -0.3185     0.1643  -1.938   0.0526 .
L1            0.5564     0.2317   2.402   0.0163 *

Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for binomial family taken to be 1)

Null deviance: 421.32  on 303  degrees of freedom
Residual deviance: 415.49  on 302  degrees of freedom
AIC: 419.49

Model 2:

glm(formula = Response ~ L2, family = binomial, data = log)

Deviance Residuals: 
Min       1Q   Median       3Q      Max  
-1.6651  -0.7235  -0.7235   0.7585   1.7138  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept)  -1.2068     0.1927  -6.264 3.75e-10 ***
L2            2.3054     0.2687   8.580  < 2e-16 ***

Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for binomial family taken to be 1)

Null deviance: 421.32  on 303  degrees of freedom
Residual deviance: 334.99  on 302  degrees of freedom
AIC: 338.99

Number of Fisher Scoring iterations: 4

Is there something similar to an R-squared for logistic regression that I could use to assess how well each model fits, and that would also let me compare the two models?

Similarly, if I run the lrm function from the rms package, which discrimination index is best to look at, and how can I compare the two models?

Best Answer

Since your models are fitted to the same data but aren't nested, you're probably best off using the AIC (Akaike information criterion). It is a measure of model fit, and arguably a better one than a pseudo-R-squared, which comes with all manner of problems.
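Both fits come from glm, so the comparison can be read off directly with AIC(). A minimal sketch, assuming the fitted objects are saved as m1 and m2 and reusing the variable and data-frame names shown in the output above:

## Refit (or reuse) the two single-predictor models; the names here are
## taken from the printed output -- substitute your own objects if you
## already have them in the workspace
m1 <- glm(Response ~ L1, family = binomial, data = log)
m2 <- glm(Response ~ L2, family = binomial, data = log)

## AIC() accepts several fitted models and returns a small comparison table
AIC(m1, m2)
##    df    AIC
## m1  2 419.49    (values as printed in the summaries above)
## m2  2 338.99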

For AIC, a lower value is better, but there is no formal test for it: you will still have to make a subjective judgement about what "better" means when choosing between two models. The AIC values should at least provide some guidance.
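To make that concrete with the numbers above (plain arithmetic, not a formal test):

## Difference in AIC between the two models, using the values printed above
delta_AIC <- 419.49 - 338.99
delta_AIC   # 80.5, strongly favouring Model 2

A gap of this size is far beyond the commonly quoted rule of thumb that a difference above roughly 10 leaves essentially no support for the higher-AIC model, but that remains a guideline rather than a hypothesis test.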

You should also look at the residuals of your model fits and similar diagnostics, because one thing AIC will not tell you is whether all of your models fit poorly; it can only tell you whether one fits relatively better than another.
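As a minimal sketch of such checks in base R, assuming m2 is the glm fit of Response ~ L2 from above (the same applies to m1). Raw residual plots for a binary outcome are coarse, so a binned version such as arm::binnedplot is often easier to read:

## Deviance residuals against fitted probabilities; look for gross patterns
## or extreme points rather than a textbook random scatter
plot(fitted(m2), residuals(m2, type = "deviance"),
     xlab = "Fitted probability", ylab = "Deviance residual")
abline(h = 0, lty = 2)

## Cook's distance for the same fit, to flag influential observations
plot(m2, which = 4)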
