I will answer each of your queries in turn.
Is the syntax correctly specifying the clustering and random effects?
The model you've fit here is, in mathematical terms, the model
$$ Y_{ijk} = {\bf X}_{ijk} {\boldsymbol \beta} + \eta_{i} + \theta_{ij} + \varepsilon_{ijk}$$
where

- $Y_{ijk}$ is the reaction time for observation $k$ during session $j$ on individual $i$,
- ${\bf X}_{ijk}$ is the predictor vector for observation $k$ during session $j$ on individual $i$ (in the model you've written, this comprises all main effects and all interactions),
- ${\boldsymbol \beta}$ is the regression coefficient vector,
- $\eta_i$ is the random effect for person $i$, which induces correlation between observations made on the same person,
- $\theta_{ij}$ is the random effect for individual $i$'s session $j$, and
- $\varepsilon_{ijk}$ is the residual error term.
As noted on pages 14-15 here, this model correctly specifies that sessions are nested within individuals, which is the case from your description.
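If it helps to see the syntax alongside the math, here is a minimal sketch of that nested specification in `lme4` (the names `RT`, `A`, `B`, `subject`, and `session` are placeholders for illustration, not taken from your model):

```r
library(lme4)

# Sessions nested within individuals; variable names are assumed.
# (1 | subject/session) expands to (1 | subject) + (1 | subject:session),
# i.e., the eta_i and theta_ij random effects in the model above.
fit <- lmer(RT ~ A * B + (1 | subject/session), data = mydata)
```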
Beyond syntax, is this model appropriate for the above within-subject design?
I think this model is reasonable: it respects the nesting structure in the data, and individual and session are reasonably envisioned as random effects, as this model asserts. You should look at the relationships between the predictors and the response with scatterplots, etc., to ensure that the linear predictor (${\bf X}_{ijk} {\boldsymbol \beta}$) is correctly specified. The other standard regression diagnostics are worth examining as well.
Should the full model specify all interactions of fixed effects, or only the ones that I am really interested in?
I think starting with such a heavily saturated model may not be a great idea unless it makes sense substantively. As I said in a comment, it will tend to overfit your particular data set and may make your results less generalizable. Regarding model selection, if you do start with the completely saturated model and do backwards selection (which some people on this site object to, with good reason), then you have to respect the hierarchy in the model: if you eliminate a lower-order interaction, you should also delete all higher-order interactions involving those variables. For more discussion, see the linked thread.
I have not included the STIM factor in the model, which characterizes the specific stimulus type used in a trial, but which I am not interested in estimating in any way - should I specify that as a random factor given it has 123 levels and very few data points per stimulus type?
Admittedly not knowing anything about the application (so take this with a grain of salt), that sounds like a fixed effect, not a random effect. That is, the stimulus type sounds like a variable that would correspond to a fixed shift in the mean response, not something that would induce correlation between subjects who received the same stimulus type. But the fact that it's a 123-level factor makes it cumbersome to enter into the model, so I'd want to know how large an effect you expect it to have. Regardless of the size of the effect, leaving it out will not bias your slope estimates, since this is a linear model, but it may make your standard errors larger than they would otherwise be.
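That said, if 122 extra fixed-effect coefficients are too cumbersome, one pragmatic device for a many-level nuisance factor is a random intercept, which absorbs stimulus-to-stimulus variation with a single variance parameter and shrinks the individual shifts toward zero. A hedged sketch (the names `RT`, `A`, `B`, `subject`, `session`, and `STIM` are assumptions):

```r
library(lme4)

# STIM entered as a crossed random intercept alongside the nested terms,
# trading 122 fixed coefficients for one variance component.
fit.stim <- lmer(RT ~ A * B + (1 | subject/session) + (1 | STIM),
                 data = mydata)
```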
If I understand your question correctly, you can specify your model with nested random effects like this:
library(nlme)  # lme() is in the nlme package
fit.1 <- lme(Change ~ Dose*Time, random = ~1 | ID/Dose, data = mydata)
To specify the covariance structure, e.g. a simple compound-symmetry form, try this:
fit.2 <- lme(Change ~ Dose*Time, random = ~1 | ID/Dose, data = mydata, correlation = corCompSymm())
To look at the estimated parameters try:
summary(fit.1)
To get all estimated coefficients try:
coef(fit.1)
To get p-values, use:
anova(fit.1)
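If you want to compare the two covariance specifications above, nlme can also run a likelihood-ratio comparison (the fixed effects are identical in fit.1 and fit.2, so the default REML fits are comparable):

```r
anova(fit.1, fit.2)
```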
Notice that if you need to specify the covariance structure of the residuals, you'll have to use nlme: although lme4 (i.e. the lmer function) is the more modern package, it currently does not support that feature.
This is just a general comment on pseudoreplication discussions.
Many of the discussions and queries regarding pseudoreplication in the current literature and on the internet refer only to my initial 1984 paper and seem unaware of many later clarifying papers by my colleagues and me that focus partly or completely on the topic. These are listed below; PDFs of most of them can be accessed at my university website, http://www.bio.sdsu.edu/pub/stuart/stuart.html
Reading these may be helpful to researchers. It is regrettable that confusing or simply fallacious re-definitions of the “sin” are so prevalent in articles, books, and on the internet. Be careful whom you accept as your “statistical gurus” and of all that you see on the glossy pages of “reputable” journals!
Hurlbert, S.H. 1990. Pastor binocularis: Now we have no excuse [review of Design of Experiments by R. Mead]. Ecology 71: 1222-1228.
Hurlbert, S.H. and M.D. White. 1993. Experiments with invertebrate zooplanktivores: Quality of statistical analyses. Bulletin of Marine Science 53: 128-153.
Hurlbert, S.H. 1993. Dragging statistical malpractice into the sunshine [Citation Classic: Pseudoreplication and the design of ecological field experiments]. Current Contents 1993: 18.
Lombardi, C.M. and S.H. Hurlbert. 1996. Sunfish cognition and pseudoreplication. Animal Behaviour 52: 419-422.
Hurlbert, S.H. and W.G. Meikle. 2003. Pseudoreplication, fungi, and locusts. Journal of Economic Entomology 96: 533-535.
Hurlbert, S.H. 2003. On misinterpretations of pseudoreplication and related matters: a reply to Oksanen. Oikos 104: 591-597.
Hurlbert, S.H. and C.M. Lombardi. 2004. Research methodology: experimental design, sampling design, statistical analysis. In M.M. Bekoff (ed.), Encyclopedia of Animal Behavior, 2: 755-762. Greenwood Press, London.
Kozlov, M. and S.H. Hurlbert. 2006. Pseudoreplication, chatter, and the international nature of science. Journal of Fundamental Biology 67(22): 128-135. [In Russian; English translation available as PDF.]
Hurlbert, S.H. 2009. The ancient black art and transdisciplinary extent of pseudoreplication. Journal of Comparative Psychology 123: 434-443.
Hurlbert, S.H. 2010. Pseudoreplication capstone: Correction of 12 errors in Koehnle & Schank (2009). Department of Biology, San Diego State University, San Diego, California. 5 pp.
Hurlbert, S.H. 2013. Pseudofactorialism, response structures and collective responsibility. Austral Ecology 38: 646-663 + supplementary information.
Hurlbert, S.H. 2013. Affirmation of the classical terminology for experimental design via a critique of Casella's Statistical Design. Agronomy Journal 105: 412-418 + supplementary information.
Hurlbert, S.H. 2013. [Review of Biometry, 4th edn, by R.R. Sokal & F.J. Rohlf]. Limnology and Oceanography Bulletin 22(2): 62-65.