Solved – R lmerTest and Tests of Multiple Random Effects

likelihood-ratio, lme4-nlme, mixed-model

I'm curious about how the lmerTest package in R, specifically the "rand" function, handles tests of random effects. Consider the example from the lmerTest pdf on CRAN that uses the built-in "carrots" data set:

#import lme4 package and lmerTest package
  library(lmerTest)
#lmer model with correlation between intercept and slopes
#in the random part
  m <- lmer(Preference ~ sens2+Homesize+(1+sens2|Consumer), data=carrots)
# table with p-values for the random effects
  rand(m)

The model specifies two random variances (the intercept and "sens2"), both nested in "Consumer," and the covariance between the intercept and "sens2." Output (not provided in the pdf) for the random components from the lmer run follows:

Random effects:
Groups   Name        Variance Std.Dev. Corr
Consumer (Intercept) 0.195168 0.44178      
         sens2       0.002779 0.05271  0.18
Residual             1.070441 1.03462      
Number of obs: 1233, groups:  Consumer, 103

This is as expected given the model specification. The output from the rand function follows:

Analysis of Random effects Table:
                 Chi.sq Chi.DF p.value  
sens2:Consumer   6.99      2    0.03 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Given the random effects table, I think lmerTest is evaluating the random slope for "sens2", but it might also be testing the covariance between the slope and intercept. The test for the random intercept is not included. I estimated another model with only the random intercept (no random slope or covariance) and got the following from the "rand" statement:

Analysis of Random effects Table:
           Chi.sq Chi.DF p.value    
Consumer   79.6      1  <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
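(For reference, that second model keeps the same fixed effects and simply reduces the random part to an intercept, i.e. something like:)

# random intercept only, no random slope and no covariance
  m1 <- lmer(Preference ~ sens2+Homesize+(1|Consumer), data=carrots)
  rand(m1)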

The test for the random variance associated with the intercept is provided here. So, does anyone know what the test of the random variance components in the first model is actually testing? Is there a way, not apparent from the documentation, to test all three of the random components? I should mention that the page for the rand function at inside-R.org has the following confusing description (which I don't see in the pdf on CRAN):

Values
Produces a data frame with tests for the random terms being non-significant.

Note
If the effect has random slopes, then first the correlations between itercept [sic] and slopes are checked for significance

Is it possible the "Values" description has it backwards and that only significant effects are reported? I ran the "step" procedure and it wasn't clear if all three random variance components were considered in the run.

Any insight on the matter is greatly appreciated.

Joe

EDIT: The plot thickens a bit. It dawned on me to check a "diagonal" covariance structure (no covariance between the random intercept and slope) by using the following (based on this excellent post):

m2 <- lmer(Preference ~ sens2+Homesize+(1|Consumer)+(0+sens2|Consumer), data=carrots)

The lmer output for the random variances, using VarCorr, is as follows:

Groups     Name        Std.Dev.
Consumer   (Intercept) 0.441807
Consumer.1 sens2       0.052719
Residual               1.034618

This correctly omits the covariance (correlation) between the random slope and intercept. Running the "rand" function from lmerTest produces the following output:

Analysis of Random effects Table:
                 Chi.sq Chi.DF p.value    
Consumer         84.4      1  <2e-16 ***
sens2:Consumer    6.3      1    0.01 *  
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

So it will test the two variance components for this model. But the question remains regarding the first model with the random covariance. What is lmerTest testing?

Best Answer

The documentation on the lmerTest::rand() function is definitely terse.

From what I've been able to gather, I think it tests the hypothesis that the random-effect variation (e.g., a varying intercept, (1 | Consumer)) is significant versus the null that there is no between-group variation, $H_0 : \sigma_{\alpha}^2 = 0$, where $\alpha_{j[i]} \sim N(\mu_\alpha, \sigma_\alpha^2)$ and $j[i]$ indexes the group, $j = 1, \ldots, J$, to which observation $i$ belongs. (I'm following Gelman & Hill's (2007) notation; see ch. 12.)


I'm no expert, so the code confused me a bit. Specifically, I'm not clear on what the elimRandEffs function does, but I'd guess it's converting $\alpha_{j[i]}$ to a fixed (that is, pooled) term $\alpha$ and then comparing this reduced model to the original. Hopefully someone with greater knowledge can clarify this.

On the theoretical side, rand must be performing something like the test proposed in Lee and Braun (2012). However, unlike their extension to testing $0 < r \leq q$ random effects at a time (sec. 3.2), the rand output appears to test only one random effect at a time. A simpler summary of the same idea is in slides 10-12 found here.
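To make that concrete, here is a hand-rolled sketch of that single-variance LRT for your intercept-only model. It uses ML fits so the log-likelihoods are directly comparable; rand computes its likelihoods internally (possibly from the REML fit), so its chi-square won't match this exactly.

# Sketch: likelihood-ratio test of H0: sigma_alpha^2 = 0 for the random intercept
  library(lmerTest)
  data(carrots)
  full <- lmer(Preference ~ sens2+Homesize+(1|Consumer), data=carrots, REML=FALSE)
  null <- lm(Preference ~ sens2+Homesize, data=carrots)   # no between-consumer variation
  chisq <- 2*(c(logLik(full)) - c(logLik(null)))
  pchisq(chisq, df=1, lower.tail=FALSE)  # compare with the 79.6 on 1 df in your intercept-only rand() output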


So your intuition that "lmerTest is evaluating the random slope for 'sens2' [and] might also be the covariance between the slope and intercept" is correct in that rand tests whether the random-effect variances are significantly different from zero.

But it's incorrect to say that "the test for the random intercept is not included". The RE in your first specification:

 (1 + sens2 | Consumer) 

assumes a non-zero correlation between the intercept and slope, meaning they vary together, and so rand() tests that specification against a model with no random effects (i.e., reducing to the classical regression).

The second specification

 (1  | Consumer) + (0 + sens2 | Consumer) 

gives two lines of output because the random effects are additively separable. Here rand tests (in the first output row) a model with a pooled/fixed intercept and a random slope against your specification. In the second row, the test is against a model with a pooled slope and a random intercept. So, like the step function, rand tests independent REs one at a time.
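A sketch of what those two comparisons presumably look like if done by hand (note that lme4's anova() refits the models with ML before the LRT, so the chi-squares won't match the REML-based rand() table exactly):

# Hypothetical reduced fits corresponding to the two rows of rand(m2)
  drop_int   <- lmer(Preference ~ sens2+Homesize+(0+sens2|Consumer), data=carrots)
  drop_slope <- lmer(Preference ~ sens2+Homesize+(1|Consumer), data=carrots)
  anova(m2, drop_int)    # roughly the "Consumer" row (tests the random intercept)
  anova(m2, drop_slope)  # roughly the "sens2:Consumer" row (tests the random slope)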

I'm still puzzled by the inside-R.org note that

  Note
  If the effect has random slopes, then first the correlations between itercept [sic] and slopes are checked for significance

That's not in the package documentation, so I don't know where it came from nor where such a test would be found in the output.

EDIT

I think I'm wrong about the null model in a correlated slope/intercept model as in the first specification. The step documentation says:

in the random part if correlation is present between slope and intercept, the simplified model will contain just an intercept. That is if the random part of the initial model is (1+c|f), then this model is compared to (1|f) by using LRT.

I imagine the same principle is at work for rand.
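On that reading, a minimal sketch of the comparison rand() would then be making for your first model follows; dropping the slope removes both its variance and the covariance with the intercept, which is consistent with the 2 df in your first rand() table.

# Presumed comparison for the correlated intercept/slope model: full vs. intercept-only
  m_full <- lmer(Preference ~ sens2+Homesize+(1+sens2|Consumer), data=carrots)
  m_red  <- lmer(Preference ~ sens2+Homesize+(1|Consumer), data=carrots)
  lrt <- 2*(c(logLik(m_full)) - c(logLik(m_red)))  # both REML fits, same fixed effects
  pchisq(lrt, df=2, lower.tail=FALSE)              # compare with the 6.99 on 2 df from rand(m)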
