Section 2.2.2.1 from lme4 book
Because each level of sample occurs with one and only one level of batch we
say that sample is nested within batch. Some presentations of mixed-effects
models, especially those related to multilevel modeling [Rasbash et al., 2000]
or hierarchical linear models [Raudenbush and Bryk, 2002], leave the impression
that one can only define random effects with respect to factors that
are nested. This is the origin of the terms “multilevel”, referring to multiple,
nested levels of variability, and “hierarchical”, also invoking the concept of
a hierarchy of levels. To be fair, both those references do describe the use
of models with random effects associated with non-nested factors, but such
models tend to be treated as a special case.
The blurring of mixed-effects models with the concept of multiple, hierarchical
levels of variation results in an unwarranted emphasis on “levels”
when defining a model and leads to considerable confusion. It is perfectly legitimate
to define models having random effects associated with non-nested
factors. The reasons for the emphasis on defining random effects with respect
to nested factors only are that such cases do occur frequently in practice and
that some of the computational methods for estimating the parameters in
the models can be applied easily only to nested factors.
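To illustrate the distinction in code (a hedged sketch: the response `y`, the data frame `dat`, and the crossed factors `subject` and `item` are hypothetical names, not from the text), `lme4`'s formula syntax accommodates both nested and non-nested random effects:

```r
library(lme4)

# Nested: each sample occurs within exactly one batch.
# The / operator expands to (1 | batch) + (1 | batch:sample).
fm_nested <- lmer(y ~ 1 + (1 | batch/sample), data = dat)

# Non-nested (crossed): the grouping factors vary independently,
# e.g. every subject responds to every item.
fm_crossed <- lmer(y ~ 1 + (1 | subject) + (1 | item), data = dat)
```

Nothing in the crossed specification requires a hierarchy of levels, which is the point of the passage above.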
Stan is the state of the art in Bayesian model fitting. It has an official R interface through the `rstan` package. With `rstan` you would need to learn how to write your models in the Stan language. Alternatively, Stan also provides the `rstanarm` package (hat-tip to @ben-bolker for pointing out the omission), through which you can write your models in the familiar `lme4`-style syntax.
An equally user-friendly interface for Stan is the R package `brms`, which in addition is flexible enough to handle the models that basic and moderately advanced users need. For example, in your case the syntax would be exactly the same:
```r
m <- brm(Shop ~ Time + Group + Time:Group + (1 | subj),
         data = Shopping, family = binomial)
```
or, more concisely (the same formula works with `glmer` as well):
```r
m <- brm(Shop ~ Time * Group + (1 | subj),
         data = Shopping, family = binomial)
```
This model in `brms` will assume reasonable defaults for the prior distributions, but you are encouraged to select your own.
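As a hedged sketch of how you might override those defaults (the specific prior choices below are purely illustrative, not recommendations; `Shop`, `Time`, `Group`, and `subj` are the variables from the example above), `brms` lets you pass priors through the `prior` argument:

```r
library(brms)

# Illustrative priors only -- choose values appropriate to your problem.
my_priors <- c(
  prior(normal(0, 1), class = "b"),          # regression coefficients
  prior(student_t(3, 0, 2.5), class = "sd")  # SD of subject intercepts
)

m <- brm(Shop ~ Time * Group + (1 | subj),
         data = Shopping, family = binomial,
         prior = my_priors)

# Report the priors actually used; defaults fill in anything unset.
prior_summary(m)
```

`get_prior()` on the same formula and data shows every parameter class you can set a prior for, which is a convenient starting point.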
The syntax for basic models such as the one you give as an example is going to be the same between `rstanarm` and `brms`. The advantage of using `rstanarm` to fit these basic models is that it comes with pre-compiled Stan code, so it will run faster than `brms`, which needs to compile its Stan code for every model. To name a few distinguishing features, `brms` shines due to its extended support for different distributions (e.g. zero-inflated beta, von Mises, categorical), its extended syntax to cover cases where the user needs to model measurement error in predictors or outcomes (as in meta-analyses), and its ability to fit distributional regressions, non-linear models, and mixture models. For a more extensive comparison of R packages for Bayesian analysis, have a look at Bürkner (2018).
Since you are a newcomer to Bayesian models, I would also highly encourage you to read the book "Statistical Rethinking", which comes with its own R package, `rethinking`. That package is also an excellent choice, although not as remarkably user-friendly and flexible as `brms`. There is even a version of the book adapted for `brms`.
**References**
Bürkner, P.-C. (2018). The R Journal, 10(1), 395-411.
**Best Answer**
`lme4` is fully frequentist, while `rstanarm` is fully Bayesian. That means there are more differences than just whether a prior is used. For example:

- `rstanarm` reports marginal medians of the posterior density for each parameter, while `lme4` reports maximum likelihood estimates (approximately analogous to the maximum a posteriori (MAP) estimator, or mode of the posterior distribution, given uninformative priors, but see this CV answer for discussion of why this is a loose analogy).
- `rstanarm` reports posterior intervals based on quantiles of the marginal posterior distribution (not the more classical highest posterior density intervals), while `lme4` reports Wald standard errors or likelihood profile confidence intervals.

For what it's worth:

- `brms`, also based on Stan, implements a broad class of GLMMs (somewhat broader than `rstanarm`, I think).
- `MCMCglmm` implements a broad class of Bayesian mixed models (based on older MCMC approaches rather than Hamiltonian MC).
- The `blme` package implements a partly Bayesian approach to mixed models that allows for weakly or strongly informative priors, but reports MAP estimates (it builds on `lme4`'s technology).
- The `R-INLA` package (not on CRAN) uses integrated nested Laplace approximations; it also allows priors and returns MAP estimates.
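As a hedged sketch of the reporting differences described above (this reuses the question's model and data, and assumes `lme4` and `rstanarm` are installed), the two packages expose their point estimates and intervals through similar accessors:

```r
library(lme4)
library(rstanarm)

# Frequentist fit: fixef() returns maximum likelihood estimates;
# confint() gives Wald or profile confidence intervals.
f1 <- glmer(Shop ~ Time * Group + (1 | subj),
            data = Shopping, family = binomial)
fixef(f1)
confint(f1, method = "Wald")

# Bayesian fit: point estimates are posterior medians by default;
# posterior_interval() gives quantile-based posterior intervals.
f2 <- stan_glmer(Shop ~ Time * Group + (1 | subj),
                 data = Shopping, family = binomial)
fixef(f2)
posterior_interval(f2, prob = 0.95)
```

Because the `rstanarm` intervals are quantile-based, they are not the highest-posterior-density intervals some Bayesian texts describe, which is worth keeping in mind when comparing them to `lme4`'s confidence intervals.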