Suppose $y = X\beta + Zu + \epsilon$ is our mixed-effects model, where $u = (u_1, \ldots, u_r)$ with $u_j \stackrel{i.i.d.}{\sim} N(0, \sigma^2_{a})$ for $j = 1, \ldots, r$, and $\epsilon = (\epsilon_1, \ldots, \epsilon_n)$ are i.i.d. $N(0, \sigma^2_{b})$. Furthermore, $\epsilon_j$ and $u_i$ are assumed to be independent for all $i$ and $j$.
I am interested in testing the hypothesis $H_{0}: \sigma_{a}^2 = 0$ vs. $H_{1}: H_{0}$ is not true. The ${\bf lmer}$ function in R's ${\bf lme4}$ package does provide a bootstrap p-value for this problem, but my question is: is there any non-bootstrap way of obtaining this p-value (asymptotic or otherwise)?
Best Answer
This is usually done with a likelihood-ratio test comparing the original model to a reduced model that omits the variance component to be tested (random intercept, random slope, or the covariance between slope and intercept). One caveat: the null value $\sigma_{a}^2 = 0$ lies on the boundary of the parameter space, so the usual $\chi^2_1$ asymptotics do not hold. The asymptotic null distribution of the likelihood-ratio statistic is instead a 50:50 mixture of $\chi^2_0$ (a point mass at zero) and $\chi^2_1$ (Self and Liang, 1987; Stram and Lee, 1994), and the naive $\chi^2_1$ p-value is conservative.
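Concretely, under the 50:50 mixture the corrected p-value is simply half the $\chi^2_1$ tail probability of the observed statistic. A minimal base-R sketch (the statistic value 3.84 is only an illustration):

```r
# Boundary-corrected asymptotic p-value for testing a single variance
# component: under H0 the LRT statistic is asymptotically a 50:50
# mixture of chi^2_0 (point mass at zero) and chi^2_1.
mix_pvalue <- function(lrt) {
  0.5 * pchisq(lrt, df = 1, lower.tail = FALSE)
}

mix_pvalue(3.84)  # half of the naive chi^2_1 p-value
```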
A good example is in these tutorials:
Sample R code:
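Since no code survives here, below is a minimal sketch of the comparison, assuming the ${\bf lme4}$ package is installed. The simulated data frame `d` (response `y`, covariate `x`, grouping factor `g`) is my own illustration, not from the tutorials:

```r
library(lme4)

# Simulate one-way random-intercept data: 20 groups of 5, true sigma_a = 1.
set.seed(1)
g <- gl(20, 5)
x <- rnorm(100)
y <- 1 + 2 * x + rnorm(20)[g] + rnorm(100)
d <- data.frame(y, x, g)

# Fit H1 by ML (REML = FALSE) so its log-likelihood is comparable with lm's.
full <- lmer(y ~ x + (1 | g), data = d, REML = FALSE)
null <- lm(y ~ x, data = d)  # H0 model: random effect removed

lrt <- as.numeric(2 * (logLik(full) - logLik(null)))
p   <- 0.5 * pchisq(lrt, df = 1, lower.tail = FALSE)  # boundary-corrected p-value
c(statistic = lrt, p.value = p)
```

For an exact finite-sample alternative with a single random effect, the ${\bf RLRsim}$ package's `exactRLRT()` simulates the null distribution of the (restricted) likelihood-ratio statistic instead of relying on the asymptotic mixture.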