Solved – Variance explained by random effects using lme4

glmm, r, variance

I am using the glmer() function from the lme4 package to fit a GLMM with a Poisson distribution. In all the examples I see, the random-effects part of the output includes a Residual row estimated from the data (marked with two asterisks on either side in the example below). That information can then be used to interpret how much variation is explained by the random effect. Here is an example:

> summary(M1)
Linear mixed model fit by REML 
Formula: Richness ~ NAP * fExp + (1 | fBeach) 
Data: RIKZ 
AIC   BIC    logLik   deviance REMLdev
236.5 247.3 -112.2    230.3    224.5
Random effects:
Groups   Name          Variance Std.Dev.
fBeach   (Intercept)   3.3072   1.8186  
**Residual             8.6605   2.9429**
Number of obs: 45, groups: fBeach, 9

Fixed effects:
              Estimate Std. Error t value
(Intercept)   8.8611     1.0208   8.681
NAP          -3.4637     0.6279  -5.517
fExp11       -5.2556     1.5451  -3.401
NAP:fExp11    2.0005     0.9461   2.114

Correlation of Fixed Effects:
           (Intr) NAP    fExp11
NAP        -0.181              
fExp11     -0.661  0.120       
NAP:fExp11  0.120 -0.664 -0.221
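
For an LMM like the one above, the usual way to quantify how much variation the random intercept accounts for is the intraclass correlation: the group-level variance divided by the total variance. A minimal sketch using the numbers printed above, plus lme4's VarCorr() accessor (assuming the fitted lmer model is called M1, as in the summary):

# Proportion of variance at the beach level, from the printed components:
3.3072 / (3.3072 + 8.6605)          # ~0.28, i.e. about 28% between beaches

# The same quantities pulled programmatically from the fitted model:
vc <- as.data.frame(VarCorr(M1))    # columns grp, var1, var2, vcov, sdcor
vc$vcov[vc$grp == "fBeach"] / sum(vc$vcov)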

However, when I use my own data, the output does not include this information, and I am not sure why. I want to know how much variation is explained by my random effects, but I can't figure out how to access the information needed to answer the question. Any clues? Is this a data/statistics issue, or just a matter of knowing how to access the information? I apologize if I'm asking in the wrong place. The output I get looks similar to this:

Generalized linear mixed model fit by the Laplace approximation 
Formula: y ~ z.score(x1) + z.score(x2) + z.score(x3) + z.score(x4) + z.score(x5) +      z.score(x6) + (1 | RE) 
Data: p 
AIC   BIC logLik deviance
419.5 454.7 -201.8    403.5
Random effects:
Groups Name        Variance Std.Dev.
RE     (Intercept) 0.021605 0.14699 
Number of obs: 600, groups: RE, 40

Fixed effects:
                  Estimate   Std. Error z value Pr(>|z|)    
(Intercept)       1.70591    0.02911    58.60   < 2e-16 ***
z.score(x1)       0.19087    0.03595    5.31    1.10e-07 ***
z.score(x2)      -0.14302    0.04083   -3.50    0.000460 ***
z.score(x3)      -0.16562    0.04020   -4.12    3.79e-05 ***
z.score(x4)       0.13229    0.03425    3.86    0.000112 ***
z.score(x5)      -0.10588    0.03985   -2.66    0.007885 ** 
z.score(x6)       0.17600    0.05798    3.04    0.002401 ** 
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

Correlation of Fixed Effects:
            (Intr) z.(x1) z.(x2 z.s(x3) z.(x4 z.(x5
z.scr(x1)  -0.051                                   
z.s(x2)     0.038  0.259                            
z.scr(x3)   0.045  0.156  0.113                     
z.(x4      -0.040  0.144 -0.052  0.044              
z.(x5       0.026 -0.368 -0.339 -0.072 -0.073       
z.scor(x6) -0.031 -0.020  0.002 -0.143 -0.004  0.004

Here is some sample data, to be fit with glmer(y ~ x1 + (1 | RE), data = d, family = poisson); a minimal fitting sketch follows the data block.

d <- data.frame(
  y  =  c(3, 5, 2, 6, 3, 7, 2, 3, 0, 4, 0,10, 1, 4, 0, 4, 2, 3, 0, 6, 
          3, 4, 2, 3, 2, 3, 3, 4, 0, 5, 6, 5, 4, 4, 0, 3, 1, 6, 0, 3, 2, 
          2, 1, 6, 2, 7, 0, 2, 0, 4, 0, 6, 4, 5, 1, 5, 1, 4, 1, 2, 3, 6, 
          6, 7, 0, 5, 0, 9, 1, 4, 5, 6, 1, 7, 1, 4, 1, 4, 0, 4, 1, 6, 1, 
          4, 0, 7, 1, 4, 0, 6, 0, 7, 2, 6, 0, 6, 1, 5, 0, 4, 1, 7, 2, 4, 
          1, 5, 1, 7, 2, 5, 0, 4, 3, 5, 1, 4, 0, 3, 0, 6, 0, 8, 3, 9, 0, 
          2, 3, 8, 0, 1, 0, 3, 0, 5, 0, 4, 4, 5, 0, 5, 1, 5, 3, 5, 1, 4, 
          3, 4, 4, 4, 4, 4, 4, 7, 1, 8, 1, 4, 0, 2, 2, 5, 1, 4, 1, 5, 1, 
          4, 2, 4, 2, 4, 0, 6, 1, 6, 0, 6, 1, 2, 1, 3, 1, 8, 1, 6, 1, 6, 
          0, 6, 1, 6, 2, 6, 2, 4, 0, 1, 1, 1, 1, 6, 5, 5, 1, 5, 2, 4, 2, 
          6, 1, 7, 1, 8, 2, 8, 1, 8, 2, 4, 1, 7, 3, 6, 4, 7, 3, 7, 1, 6, 
          3, 5, 1,10, 1, 7, 2, 5, 1, 5, 0, 6, 1, 8, 4, 7, 1, 6, 1, 9, 
          0, 9, 1, 3, 2, 5, 2, 9, 3, 5, 0, 2, 2, 3, 0, 5, 0, 5, 0, 4, 3, 
          6, 1,10, 2, 8, 0, 6, 0, 4, 2, 6, 2, 4, 2, 6, 1, 4, 0, 5, 2, 
          6, 1, 5, 2, 5, 1, 5, 1, 5),
  x1 = rep(c(0.1008, 0.0511, 0.1792, 1.0345), c(80, 80, 80, 60)),
  RE = rep(c(37, 88, 139, 190, 241, 292, 343, 394, 91, 142, 193, 244, 295, 
             346, 397, 40, 94, 145, 43, 196, 247, 298, 349, 400, 301, 352, 
             403, 250, 148, 199, 46, 97, 355, 406, 253, 304, 49, 100, 151, 
             202, 37, 88, 139, 190, 241, 292, 343, 394, 91, 142, 193, 244, 
             295, 346, 397, 40, 43, 94, 145, 247, 298, 349, 196, 400, 199, 
             250, 301, 352, 406, 46, 97, 148, 403, 49, 100, 151, 202, 253, 
             304, 355, 37, 88, 139, 190, 241, 292, 343, 394, 193, 244, 346, 
             397, 295, 40, 91, 142, 43, 94, 145, 196, 46, 97, 148, 151, 247, 
             400, 298, 349, 352, 199, 250, 301, 403, 253, 304, 355, 202, 406, 
             49, 100, 37, 88, 139, 190, 241, 292, 343, 394, 346, 397, 193, 
             244, 295, 40, 91, 142, 43, 94, 145, 196, 247, 298, 349, 400, 
             97, 148, 46, 199, 250, 301), each=2)
)
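
For reference, a minimal sketch of that fitting call; fit is just an assumed name for the model object, and its summary should show a Random effects table with only the RE intercept row and no Residual row:

library(lme4)

d$RE <- factor(d$RE)   # make the grouping variable an explicit factor

fit <- glmer(y ~ x1 + (1 | RE), data = d, family = poisson)
summary(fit)           # Random effects: only the RE (Intercept) variance
VarCorr(fit)           # likewise, no Residual entry for a Poisson GLMM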

Best Answer

Such a value doesn't exist for a GLMM. The model you show that does have a Residual component is an LMM, not a GLMM. In a GLMM there is a known mean-variance relationship (for a Poisson response the variance equals the mean), so there is no residual scale parameter $\sigma$ to estimate. You can compute the residual deviance, but it is not a variance parameter (and hence has no corresponding standard deviation), which is enough for it not to appear in the GLMM output.
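
To make this concrete in lme4 terms, a short sketch (assuming fit is the glmer() model from the sketch that follows the sample data above): the family fixes the dispersion, so no residual standard deviation is estimated, although a residual deviance can still be computed.

sigma(fit)      # should be 1: no estimated scale parameter for a Poisson family

# Residual deviance in the GLM sense; a goodness-of-fit quantity,
# not a variance component:
sum(residuals(fit, type = "deviance")^2)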