Mixed-Model – How to Inspect Interaction Between Scenario and Group with Four Levels in Linear Mixed Models

interaction · lme4-nlme · lsmeans · mixed-model · multiple-comparisons

I have data with which I examine whether there is a difference between two scenarios in experienced temporal demand. In addition, participants are assigned to four different roles (AO, RO, TO, VP). The two scenarios (ajo1 & ajo2) are consecutive, each lasting about an hour with a 15-minute break in between. I want to know whether participants in different roles experience different amounts of demand.

Hence, I fit two models (linear mixed models with lmer). The first model has scenario as a fixed effect and participant as a random effect. The second model has scenario, role, and their interaction as fixed effects and participant as a random effect. Adding the interaction term improves the model significantly.

lmer makes one group (here AO) the reference group. How do I find out how each group differs from every other group, not just from the reference? I only know that RO's slope differs significantly from the reference, but what about the difference in slopes between VP and RO, for example?

Lastly, when inspecting pairwise differences between the roles per scenario (using lsmeans), none of the differences is significant. Should there be a significant difference between at least two roles when a significant interaction is observed?

library(lme4)
lmer(Temporal_demands ~ scenario * role + (1|id), data = dat)

[Two screenshots of model output were attached here.]

Best Answer

The second model has scenario, role, and their interaction as fixed effects and participant as a random effect. Adding the interaction term improves the model significantly.

Presumably you made that evaluation by an anova() likelihood-ratio test. But inference about "fixed effects" in mixed models isn't quite so straightforward as it is without random effects. For example, likelihood-ratio tests are probably best done on models fit with maximum likelihood rather than the default REML that you evidently used. Even then, the p-values aren't exact. See this answer and this page linked from that answer for further information and alternate approaches.
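For instance, a sketch using the variable names from the question: fit both models with maximum likelihood (REML = FALSE) before comparing them. (Current versions of lme4 will, by default, refit with ML for you when anova() compares models that differ in fixed effects, but being explicit avoids surprises.)

library(lme4)
# Fit with ML rather than the REML default before a likelihood-ratio test
m1 <- lmer(Temporal_demands ~ scenario + (1|id), data = dat, REML = FALSE)
m2 <- lmer(Temporal_demands ~ scenario * role + (1|id), data = dat, REML = FALSE)
anova(m1, m2)   # chi-squared likelihood-ratio test; the p-value is approximate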

This thread and its links provide extensive discussion about inference with mixed models. Recognize that any reported p-values are based on particular choices made by the authors of the software package you use.

We'll put aside those problems for the rest of the answer, but you should be aware of them.

How do I find out how each group differs from every other group, not just from the reference?

Once you have the coefficient covariance matrix (which is provided by the software even if it doesn't always show up in the initial summary), you can obtain the variances (and thus the standard errors) of differences between any combinations of scenario and role that you want, by using the formula for the variance of a weighted sum of correlated variables.
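In symbols: if $\hat{\boldsymbol\beta}$ is the vector of coefficient estimates, $\mathbf{V}$ their covariance matrix, and $\mathbf{w}$ a vector of weights, then

$$\operatorname{Var}\left(\mathbf{w}^\top \hat{\boldsymbol\beta}\right) = \mathbf{w}^\top \mathbf{V}\, \mathbf{w} = \sum_i \sum_j w_i w_j \operatorname{Cov}\left(\hat\beta_i, \hat\beta_j\right).$$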

For the predicted value at a single combination of scenario and role, the correlated variables in that formula are the estimated coefficients, and the weight on each coefficient is the corresponding predictor (design-matrix) value for that combination.

To test whether the difference between two such combinations is 0, you work out those predictor weights for both combinations, and use the differences between them as the weights in the variance formula above, as in the sketch below.
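A minimal sketch in R, assuming the model from the question and lme4's default treatment coding with AO and ajo1 as the reference levels; the coefficient ordering below is an assumption, so check names(fixef(m2)) against it in your own fit. It compares RO with VP within scenario ajo2:

library(lme4)
m2 <- lmer(Temporal_demands ~ scenario * role + (1|id), data = dat)  # REML fit for estimation

beta <- fixef(m2)             # fixed-effect estimates
V <- as.matrix(vcov(m2))      # their covariance matrix

# Design rows for the two cells, in the assumed coefficient order:
# (Intercept), scenarioajo2, roleRO, roleTO, roleVP,
# scenarioajo2:roleRO, scenarioajo2:roleTO, scenarioajo2:roleVP
x_RO <- c(1, 1, 1, 0, 0, 1, 0, 0)    # RO in scenario ajo2
x_VP <- c(1, 1, 0, 0, 1, 0, 0, 1)    # VP in scenario ajo2
w <- x_RO - x_VP                     # contrast weights

est <- sum(w * beta)                 # estimated difference
se <- sqrt(drop(t(w) %*% V %*% w))   # its standard error
c(estimate = est, SE = se, z = est/se)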

Software tools can do this for you. You say that you used lsmeans; its successor, the emmeans package, is a good choice for this kind of analysis of mixed models. Read the package vignettes to see how to proceed; a brief sketch follows.
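For example (reusing the m2 fit from the sketch above), emmeans can compute the within-scenario pairwise role comparisons, Tukey-adjusted by default, as well as pairwise differences between the roles' scenario effects:

library(emmeans)
emm <- emmeans(m2, ~ role | scenario)   # estimated marginal means per cell
pairs(emm)                              # pairwise role contrasts within each scenario

# Differences between roles in the ajo1-to-ajo2 change (interaction contrasts):
contrast(emmeans(m2, ~ role * scenario), interaction = "pairwise")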

Should there be a significant difference between at least two roles when a significant interaction is observed?

Not necessarily: the multiple-comparisons correction can lead to exactly this situation. With 4 roles there are 6 pairwise comparisons among the slopes, and 28 pairwise comparisons among your 8 combinations of scenario and role. The more comparisons you make, the smaller an unadjusted p-value must be before you can be confident it isn't "significant" just by chance.

For example, you have a p-value of 0.003408 for one of your coefficients, which seems quite small. But if you make 28 pairwise comparisons, there is a better than 9% chance of finding at least one p-value that small just by chance ($1-(1-0.003408)^{28} \approx 0.091$), even if there are no true differences.
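You can check that arithmetic, and apply a standard multiplicity adjustment such as Holm's, directly in R (the vector of 28 p-values below is hypothetical):

p <- 0.003408
1 - (1 - p)^28    # ~0.091: chance of at least one p-value this small in 28 tests

# Holm adjustment of 28 hypothetical p-values; the smallest becomes ~0.095
p.adjust(c(p, rep(0.5, 27)), method = "holm")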
