I'm using glmer()
with a binomial response variable. My optimal model has two fixed effects (flow and DNA) which show non-significant p values in summary(), but when I remove each fixed effect in turn from the model, the likelihood ratio test comparing the two models gives a significant p value. I'm struggling to understand (1) whether this is normal, and (2) how to report the results if the explanatory variables "flow" and "DNA" are important but their p values in summary() are well above 0.05?
Optimal model:
a25 <- glmer(Status_qpcr~(1|Root)+Flow+DNA,
family=binomial, data=spore)
summary(a25)
Generalized linear mixed model fit by maximum likelihood (Laplace
Approximation) ['glmerMod']
Family: binomial ( logit )
Formula: Status_qpcr ~ (1 | Root) + Flow + DNA
Data: spore
AIC BIC logLik deviance df.resid
72.9 81.0 -32.4 64.9 52
Scaled residuals:
Min 1Q Median 3Q Max
-2.9318 -0.8163 0.4435 0.6848 1.6133
Random effects:
Groups Name Variance Std.Dev.
Root (Intercept) 0.3842 0.6199
Number of obs: 56, groups: Root, 9
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.97752 0.79252 -1.233 0.217
Flow 3.82779 2.27165 1.685 0.092 .
DNA 0.01616 0.01039 1.556 0.120
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Correlation of Fixed Effects:
     (Intr) Flow
Flow -0.775
DNA  -0.576  0.227
Likelihood ratio test:
a26 <- update(a25,~.-DNA)
anova(a25,a26)
Data: spore
Models:
a26: Status_qpcr ~ (1 | Root) + Flow
a25: Status_qpcr ~ (1 | Root) + Flow + DNA
Df AIC BIC logLik deviance Chisq Chi Df Pr(>Chisq)
a26 3 74.802 80.878 -34.401 68.802
a25 4 72.897 80.998 -32.448 64.897 3.9049 1 0.04815 *
a27 <- update(a25,~.-Flow)
anova(a25,a27)
Data: spore
Models:
a27: Status_qpcr ~ (1 | Root) + DNA
a25: Status_qpcr ~ (1 | Root) + Flow + DNA
Df AIC BIC logLik deviance Chisq Chi Df Pr(>Chisq)
a27 3 78.440 84.723 -36.220 72.440
a25 4 72.897 80.998 -32.448 64.897 7.5427 1 0.006025 **
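The discrepancy between the two kinds of p-values can be checked by hand from the printed output alone (a sketch for the DNA term, using only the numbers shown above; no refit needed):

```r
## Wald p-value, as reported by summary(): z = estimate / std. error
z <- 0.01616 / 0.01039
p_wald <- 2 * pnorm(-abs(z))          # about 0.12, matching summary()

## LRT p-value, as reported by anova(): chi-squared = difference in deviances
p_lrt <- pchisq(68.802 - 64.897, df = 1, lower.tail = FALSE)  # about 0.048
```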
Best Answer
It looks like you are seeing the difference between Wald p-values (based on the curvature of the log-likelihood surface at the maximum likelihood estimate) and likelihood ratio test p-values (based on comparisons between the full and reduced models).

You can inspect the likelihood profiles with

tpr <- profile(a25, which = "beta_")
lattice::xyplot(tpr)

You should see that the lines are far from straight (straight lines would indicate a log-quadratic likelihood surface, which is what's assumed by Wald p-values). You can also compare

confint(a25, which = "beta_")

(likelihood ratio intervals) with

confint(a25, which = "beta_", method = "Wald")

(Wald intervals); they should be quite different. LRT CIs/p-values are essentially always better than the Wald equivalents (but much slower to compute, which is why Wald p-values are the default in summary()).
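For reporting, the two single-term deletions in the question can also be collected in one call with drop1(), which refits the model dropping each fixed effect in turn and reports the likelihood ratio test for each term (a sketch, assuming the fitted model a25 from the question is available):

```r
## Assumes a25 is the glmer() fit from the question (requires lme4 and the
## spore data). Gives the same LRT p-values as the two anova() calls above.
drop1(a25, test = "Chisq")
```

These are the p-values to report (together with the estimates and confidence intervals), rather than the Wald p-values from summary().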