I have two groups of 10 participants who were assessed three times during an experiment. To test for differences between groups and across the three assessments, I ran a 2×3 mixed design ANOVA with group (control, experimental) and time (first, second, third) as factors, plus the group × time interaction.
Both the time and group main effects were significant, and there was also a significant group × time interaction.
I'm not sure how to proceed to further examine the differences between the three assessment times, especially with respect to group membership. Initially, I only specified in the ANOVA options that all main effects should be compared, using the Bonferroni correction. However, I then realized that this compares the time points across the total sample, without distinguishing between groups. Am I right?
Therefore, I searched extensively online for a possible solution, but with little success. I only found two cases similar to mine, and their solutions are opposite!
- In an article, after the mixed design ANOVA, the authors ran two repeated measures ANOVAs as post-hoc tests, one for each group of subjects. This way, the two groups are analysed separately without any correction, am I right?
- In a guide on the internet, they say to manually add COMPARE(time) ADJ(BONFERRONI) to the SPSS syntax, just after /EMMEANS=TABLES(newgroup*time), while running the mixed ANOVA. This way, the three times are compared separately for each group, with Bonferroni correction, am I right?
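For reference, the second approach would look roughly like this in full GLM repeated-measures syntax. This is only a sketch: the variable names t1, t2, t3 (the three assessments) and group are placeholders for whatever your dataset actually uses.

```
GLM t1 t2 t3 BY group
  /WSFACTOR = time 3 Polynomial
  /EMMEANS = TABLES(group*time) COMPARE(time) ADJ(BONFERRONI)
  /WSDESIGN = time
  /DESIGN = group.
```

The COMPARE(time) keyword inside the TABLES(group*time) request is what makes SPSS run the pairwise time comparisons separately within each group.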
What do you think? Which would be the correct way to proceed?
Best Answer
Answer edited to incorporate the encouraging and constructive comment by @Ferdi.
I would proceed as follows:
Suppose we have a dataset with columns depV, Group, F1, F2. I fit a 2x2x2 mixed design ANOVA where depV is the dependent variable, F1 and F2 are within-subject factors, and Group is a between-subject factor. Further suppose the F test has revealed that the Group*F2 interaction is significant. I therefore need post hoc t-tests to understand what drives the interaction.
Two kinds of contrasts can disentangle the interaction: first, testing whether the F2 effect (the difference depV(F2=1) - depV(F2=0)) differs between groups; second, comparing the groups within each level of F2. The second t-test corresponds to the comparison performed by the EMMEANS command. The EMMEANS comparison could reveal, for example, that depV was larger in Group 1 in the condition F2=1.
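As a sketch of that EMMEANS comparison in MIXED syntax, assuming long-format data with a subject identifier column id (a name I am inventing here; adapt it to your data):

```
MIXED depV BY Group F1 F2
  /FIXED = Group F1 F2 Group*F1 Group*F2 F1*F2 Group*F1*F2
  /REPEATED = F1*F2 | SUBJECT(id) COVTYPE(UN)
  /EMMEANS = TABLES(Group*F2) COMPARE(Group) ADJ(BONFERRONI).
```

COMPARE(Group) inside TABLES(Group*F2) asks SPSS to compare the two groups separately at each level of F2.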
However, the interaction could also be driven by something else, which is what the first test verifies: whether the difference depV(F2=1) - depV(F2=0) differs between groups. This is a contrast you cannot verify with the EMMEANS command (at least I did not find an easy way).
Now, in models with many factors it is a bit tricky to write down the /TEST line, i.e. the sequence of 1, 1/2, 1/4, etc., called the L matrix. Typically, if you get the error message "The L matrix is not estimable", you are forgetting some elements. One link that explains the recipe is this one: https://stats.idre.ucla.edu/spss/faq/how-can-i-test-contrasts-and-interaction-contrasts-in-a-mixed-model/
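Following the recipe from that link, the interaction contrast for this hypothetical 2x2x2 example might look like the block below. This is a sketch under the same assumptions as before (placeholder variable names, a subject identifier id); note that the coefficient order must match SPSS's internal ordering of the factor-level combinations, so always check the L-matrix printout in the output before trusting the result.

```
MIXED depV BY Group F1 F2
  /FIXED = Group F1 F2 Group*F1 Group*F2 F1*F2 Group*F1*F2
  /REPEATED = F1*F2 | SUBJECT(id) COVTYPE(UN)
  /TEST = 'F2 effect: Group 1 vs Group 2'
    Group*F2     1 -1 -1 1
    Group*F1*F2  1/2 -1/2 1/2 -1/2 -1/2 1/2 -1/2 1/2.
```

The Group*F2 coefficients (1 -1 -1 1) encode the difference of differences, and the Group*F1*F2 row spreads each of those coefficients evenly (1/2 each) over the two F1 levels; leaving out that higher-order row is exactly what triggers the "L matrix is not estimable" error.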