Solved – Need help interpreting lmer output

generalized-linear-model, lme4-nlme, poisson-distribution, r

I'm not sure whether this belongs here (or Cross Validated), but until somebody tells me otherwise, I'll keep it here.

I initially fit a mixed model with lme() to calculate p values:

    library(nlme)  # lme() lives in nlme
    # Return the estimate and p value for pos, rounded to 3 decimal places
    pos.mod <- function(x) round(t(as.data.frame(summary(lme(value ~ pos,
      random = ~1 | id2, data = x))$tTable[2, c("Value", "p-value")])), 3)

However, the residuals were highly non-normal (the data are counts), so I wound up using glmer() with a Poisson error distribution.

    library(lme4)  # glmer() lives in lme4
    # Return the estimate, std. error, and Pr(>|z|) for pos, rounded to 3 decimal places
    pos.mod <- function(x) round(summary(glmer(value ~ pos + (1 | pos),
      family = "poisson", data = x))$coefficients[2, c(1, 2, 4)], 3)
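
For reference, the function gets applied to each sp/variable subset roughly like this (a minimal sketch; the data frame name dat is a placeholder):

    # Sketch only: fit the model separately for each sp/variable subset
    results <- lapply(split(dat, list(dat$sp, dat$variable)), pos.mod)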

That seemed to deal with the problem nicely, but I need some help interpreting the results:

    sp  variable Pr(>|z|) Estimate Std. Error Estimate_new.pos Std. Error_new.pos
    Sp1       bn    0.292    0.090      0.085            0.090              0.085
    Sp1      con    0.949    0.015      0.226            0.015              0.226
    Sp1       fn    0.651    0.182      0.403            0.182              0.403
    Sp1      ppn    0.491    0.124      0.181            0.124              0.181
    Sp1       tn    0.206    0.091      0.072            0.091              0.072
    Sp2       bn    0.000    0.316      0.080            0.316              0.080
    ...

Now, I believe that Pr(>|z|) is functionally, but not mathematically, equivalent to my p value. However, I am unsure whether I should report these values as p = or Pr(>|z|) =, and if the latter, whether it implies that the effect is significant in the same way a p value does. So, based on the fragment of results I posted above, would it be fair to say that:

Sp2 appears to have a highly significant effect on bn counts?

Furthermore, I am a bit wary of these results, given that the glmer() results suggest that several effects of my independent variable (sp) are highly significant, while no remotely significant effects were found with lme().

Thanks!

Best Answer

This is not a proper answer per se, but more a set of comments on your question(s).

  1. People in general should not worry so much about $p$ values, and the two links you provide contain nice discussions and references on this.
  2. Your phrase

    Sp2 appears to have a highly significant effect on bn counts

    is worded pretty vaguely, since you include "appears to", so I doubt anyone can question that conclusion. You could try out one of the approaches to simulating $p$ values suggested in the discussions you provided; a sketch of one such approach is given after this list. That should ease your worry if you don't trust the approximations provided by lmer. In fact, unless the simulations take far too long to run, it is probably always a good idea to do that.

  3. Your two models are different, so the fact that they give different results really shouldn't be a worry. For example, for the Poisson family the variance is tied to the mean (for a Poisson response they are equal), whereas for the Gaussian model the mean and variance are separate parameters. That means that one (or both) of your models could have badly wrong variance estimates, which in turn influences your sensitivity and specificity. A quick check for overdispersion (see the second sketch after this list) can tell you how plausible the Poisson variance assumption is.
  4. Your R models: In the lme model you have

    lme(value ~ pos, random = ~1 | id2, data = x)
    

    but in the glmer call you have

    glmer(value ~ pos + (1 | pos), family="poisson", data = x)
    

    I don't know anything about your data, but shouldn't it be (1 | id2) in the call to glmer (or maybe ~1 | pos in the call to lme)? Why should they be different? And if they are deliberately different, why are you surprised by the different results?
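
To make point 2 concrete, here is a minimal sketch of a simulation-based (parametric-bootstrap) $p$ value for the effect of pos. It assumes a data frame x with columns value, pos, and id2, and uses the (1 | id2) random-intercept structure from point 4; the object names and the number of simulations are arbitrary choices, not something taken from your post.

    library(lme4)

    # Null model (no pos effect) and full model
    m0 <- glmer(value ~ 1   + (1 | id2), family = "poisson", data = x)
    m1 <- glmer(value ~ pos + (1 | id2), family = "poisson", data = x)

    # Observed likelihood-ratio statistic
    obs <- as.numeric(2 * (logLik(m1) - logLik(m0)))

    # Simulate responses from the null model, refit both models,
    # and collect the likelihood-ratio statistic under the null
    set.seed(1)
    sim <- replicate(500, {
      y <- simulate(m0)[[1]]
      as.numeric(2 * (logLik(refit(m1, y)) - logLik(refit(m0, y))))
    })

    # Bootstrap p value: how often the simulated statistic is at least as large
    mean(sim >= obs)

The PBmodcomp() function in the pbkrtest package wraps essentially the same procedure if you prefer not to code it by hand.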
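
And for point 3, a rough check of whether the Poisson variance assumption is even plausible, assuming the full model m1 from the sketch above:

    # Pearson chi-square over residual df; values well above 1 suggest overdispersion
    sum(residuals(m1, type = "pearson")^2) / df.residual(m1)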
