Solved – Testing the variance component in a mixed effects model

asymptotics, hypothesis testing, mixed model, p-value

Say $y = X\beta + Zu + \epsilon$ is our mixed effects model, where $u = (u_1, \ldots, u_r)$ with $u_j \stackrel{i.i.d.}{\sim} N(0, \sigma^2_{a})$ for $j = 1, \ldots, r$, and $\epsilon = (\epsilon_1, \ldots, \epsilon_n)$ with $\epsilon_i \stackrel{i.i.d.}{\sim} N(0, \sigma^2_{b})$; furthermore, $u_j$ and $\epsilon_i$ are assumed independent for all $j$ and $i$.
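(For concreteness, here is a minimal R sketch of simulating data from such a model; the group sizes, $\beta$, and variance values below are hypothetical, not part of the question.)

set.seed(1)
r <- 10                                  # number of levels of the random effect
m <- 20                                  # observations per level, so n = r*m
group <- factor(rep(1:r, each = m))      # Z is the indicator matrix of 'group'
x     <- rnorm(r * m)                    # one fixed-effect covariate (a column of X)
u     <- rnorm(r, sd = 1)                # u_j ~ N(0, sigma_a^2), here sigma_a = 1
eps   <- rnorm(r * m, sd = 2)            # eps_i ~ N(0, sigma_b^2), here sigma_b = 2
y     <- 1 + 0.5 * x + u[group] + eps    # y = X beta + Z u + eps, with beta = (1, 0.5)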

I am interested in testing the hypothesis $H_{0}:\sigma_{a}^2=0$ versus $H_{1}:\sigma_{a}^2>0$. The ${\bf lmer}$ function in R's ${\bf lme4}$ package can provide a bootstrap p-value for this problem, but my question is: is there any non-bootstrap way of obtaining this p-value (asymptotic or otherwise)?

Best Answer

This is usually done with a likelihood ratio test comparing the original model to a model that omits the variance component being tested (the random intercept, random slope, or random covariance between slope and intercept).
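To spell out the test statistic (the notation $\ell_0$, $\ell_1$, $\lambda$ below is mine, not from the original answer): writing $\ell_1$ for the maximized (restricted) log-likelihood of the model containing the random effect and $\ell_0$ for that of the model without it, the statistic is

$$\lambda = -2(\ell_0 - \ell_1) = 2(\ell_1 - \ell_0).$$

One caveat: because $\sigma^2_a = 0$ lies on the boundary of the parameter space, the asymptotic null distribution of $\lambda$ is not $\chi^2_1$ but the 50:50 mixture $\tfrac{1}{2}\chi^2_0 + \tfrac{1}{2}\chi^2_1$ (Self and Liang, 1987; Stram and Lee, 1994), so the p-value from a plain $\chi^2_1$ reference is conservative and can simply be halved.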


Sample R code:

> library(lme4)                                    # lmer() lives in the lme4 package
> model1 = lmer(resp ~ fixed1 + (1 | random1))     # model with the random intercept
> model2 = lm(resp ~ fixed1)                       # same fixed effects, no random effect
> chi2 = -2*logLik(model2, REML=T) + 2*logLik(model1, REML=T)   # LR test statistic
> chi2
[1] 5.011
> pchisq(chi2, df=1, lower.tail=F)                 # naive chi-square(1) p-value
[1] 0.02518675
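Applying the boundary correction mentioned above simply halves this p-value (roughly 0.0126 here). A fast simulation-based alternative to the parametric bootstrap is the exactRLRT() function in the RLRsim package, which simulates the exact finite-sample null distribution of the restricted LRT; the sketch below (not part of the original answer) assumes RLRsim is installed and that model1 contains only the single random effect being tested:

> pchisq(chi2, df=1, lower.tail=F)/2      # boundary-corrected asymptotic p-value
> library(RLRsim)                         # not part of the original answer
> exactRLRT(model1)                       # simulated exact null distribution of the RLRT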