Solved – statistical method for comparing “slopes” of logistic regressions

ancova, binary data, logistic, mixed model, multiple regression

I would like to compare the slopes of multiple logistic regressions, like an ANCOVA. Can a logistic regression even have a "slope"? My regression lines themselves appear quite linear.

My dependent variable is binary (active vs. not active), my independent variable is continuous, and my categorical predictor has six levels. I would like to know how different these regression lines are from each other.

Right now I have split my data into six separate logistic models, so each gives me a log odds ratio (rather than a slope) as its coefficient. For linear models I know how to discuss the slopes of regression lines, along with R-squared and p-values, but I am unsure how to interpret the log odds ratio other than by converting it into a proportion (I've read other threads on CV).
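For what it's worth, the coefficient on a continuous predictor in a logistic regression *is* a slope, just on the log-odds scale; exponentiating it gives an odds ratio per unit of the predictor. A minimal sketch (the data frame `dat` and variables `active` and `length` are placeholders, not from the question):

```r
# Hypothetical single-group logistic fit; 'dat', 'active', 'length' are placeholders
fit <- glm(active ~ length, data = dat, family = binomial)

# The coefficient on 'length' is the change in the log-odds of being
# active per one-unit increase in length: the logistic "slope"
coef(fit)["length"]

# Exponentiating gives an odds ratio per unit of length,
# which is often easier to report than a proportion
exp(coef(fit)["length"])
exp(confint(fit)["length", ])  # profile 95% CI on the odds-ratio scale
```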

Is there a way to do something like an ANCOVA for a multiple logistic regression? I am using R.

EDIT:
Some additional information on my data and whole-model analysis attempt:
I am using a generalized linear mixed-effects model (the glmer function in R), with "Pond" as a random effect and individual "ID" nested within it to account for repeated measures. "Treat" represents the density levels. I have been getting various warning messages and worry that the output cannot be trusted; see below:

 m1 <- glmer(Act_01~Length*Treat+(1|Pond/ID), data=tad, family=binomial)

Warning messages:
1: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv,  :
  unable to evaluate scaled gradient
2: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv,  :
  Model failed to converge: degenerate  Hessian with 4 negative eigenvalues

 mcomp<- glht(m1, linfct=mcp(Treat="Tukey"))

Warning messages:
1: In mcp2matrix(model, linfct = linfct) :
  covariate interactions found -- default contrast might be inappropriate
2: In vcov.merMod(model) :
  variance-covariance matrix computed from finite-difference Hessian is
not positive definite or contains NA values: falling back to var-cov estimated from RX
 summary(mcomp)

 Simultaneous Tests for General Linear Hypotheses
Multiple Comparisons of Means: Tukey Contrasts
Fit: glmer(formula = Act_01 ~ Length * Treat + (1 | Pond/ID), data = tad, 
    family = binomial)
Linear Hypotheses:

              Estimate Std. Error z value Pr(>|z|)    
A2 - A1 == 0    2.492      3.199   0.779  0.97105    
A3 - A1 == 0    6.704      3.265   2.053  0.31163    
A4 - A1 == 0    3.853      3.182   1.211  0.83142    
A5 - A1 == 0   -4.340      3.348  -1.296  0.78673    
A6 - A1 == 0   -6.516      3.269  -1.993  0.34535    
A3 - A2 == 0    4.212      3.037   1.387  0.73428    
A4 - A2 == 0    1.361      2.903   0.469  0.99718    
A5 - A2 == 0   -6.831      3.038  -2.248  0.21499    
A6 - A2 == 0   -9.008      3.024  -2.978  0.03423 *  
A4 - A3 == 0   -2.851      2.961  -0.963  0.92938    
A5 - A3 == 0  -11.044      3.210  -3.440  0.00766 **  
A6 - A3 == 0  -13.220      3.018  -4.381  < 0.001 ***    
A5 - A4 == 0   -8.193      3.066  -2.672  0.08062 .  
A6 - A4 == 0  -10.369      2.966  -3.496  0.00632 **      
A6 - A5 == 0   -2.176      3.181  -0.684  0.98369

Signif. codes:  0  ‘***’  0.001 ‘**’  0.01 ‘*’  0.05 ‘.’  0.1 ‘ ’  1
(Adjusted p values reported -- single-step method)

Best Answer

As mdewey notes, the best approach is to fit a single model with an interaction term to capture the group differences you are interested in.
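Concretely, that means keeping the `Length * Treat` formulation in one model and then testing whether the slope of `Length` differs across the levels of `Treat`, rather than fitting six separate models. A sketch using the question's variable names (the likelihood-ratio test and the `emmeans::emtrends` call are common options, not the only ones):

```r
library(lme4)
library(emmeans)

# One model for all six density levels; the Length:Treat interaction
# lets the log-odds slope of Length vary by level
m1 <- glmer(Act_01 ~ Length * Treat + (1 | Pond/ID),
            data = tad, family = binomial)

# A likelihood-ratio test of the interaction asks whether the
# slopes differ at all across levels
m0 <- update(m1, . ~ . - Length:Treat)
anova(m0, m1)

# Pairwise comparisons of the Length slopes (on the log-odds scale),
# which is the ANCOVA-style "compare the slopes" question
emtrends(m1, pairwise ~ Treat, var = "Length")
```

This also avoids the `glht`/`mcp` warning above: with a covariate-by-factor interaction, Tukey contrasts on the `Treat` main effects compare intercepts at `Length = 0`, which is rarely the comparison you want.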

It's difficult for readers to diagnose the issues behind your warning messages without more information. But a common recommendation is to simplify the random-effects structure until the warnings stop, which helps pin down what is causing them.
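A starting point for that troubleshooting, sketched under the assumption that the question's `tad` data frame is available (rescaling the covariate and trying other optimizers are standard lme4 suggestions, not a guaranteed fix):

```r
library(lme4)

# Rescaling the continuous predictor often helps convergence
tad$Length_z <- as.numeric(scale(tad$Length))

# Simpler random-effects structure: drop the ID nesting and
# see whether the warnings persist
m_simple <- glmer(Act_01 ~ Length_z * Treat + (1 | Pond),
                  data = tad, family = binomial)

# Refit the full model with every available optimizer; if they all
# agree, the convergence warning is likely a false alarm
m_full <- glmer(Act_01 ~ Length_z * Treat + (1 | Pond/ID),
                data = tad, family = binomial)
summary(allFit(m_full))
```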
