Solved – Three-way mixed design ANOVA or two two-way mixed design ANOVAs

manova, post-hoc, spss

I am desperately hoping for your quick help…
Would you please cross-validate 😉 my statistical procedure and help me solve a huge problem (see c)?

So imagine there is an experiment comparing a treatment group and a control group. It consists of

a) behavioural tests (Test 1, Test 2)

b) questionnaires (metric scores were derived, so Q1, Q2, Q3…)

c) an EEG experiment with a task containing two conditions (A and B)

The question is whether the groups differ, and if so, where exactly.

So to test this

for a)
for each test (1 and 2) I can use either an independent t-test (for normally distributed values) or a Mann-Whitney U test (for non-normally distributed values), with Sidak-corrected p-values

OR
should I use two-way ANOVAs (test scores as the within-subject factor, group as the between-subject factor)?
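(For reference, the first option might look like this in Python; the scipy calls are real, but the scores and group sizes are made up for illustration:)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# hypothetical scores for Test 1 in each group
treated = rng.normal(10.5, 2.0, 25)
control = rng.normal(9.0, 2.0, 25)

m = 2  # number of behavioural tests being run
alpha_sidak = 1 - (1 - 0.05) ** (1 / m)  # Sidak-adjusted alpha, about 0.0253

# choose the test based on a normality check (Shapiro-Wilk) in both groups
if stats.shapiro(treated).pvalue > 0.05 and stats.shapiro(control).pvalue > 0.05:
    stat, p = stats.ttest_ind(treated, control)
else:
    stat, p = stats.mannwhitneyu(treated, control)

print(f"p = {p:.4f}, significant at Sidak-adjusted alpha: {p < alpha_sidak}")
```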

for b)
same as for a)?

for c)
If I were to consider 2 electrodes (since I do not know where exactly to expect the effects) and have 2 conditions (A and B) in two groups (treated vs control), do I choose
the three-way mixed-design ANOVA (2x2x2)?

OR
may I consider each electrode separately and run a two-way mixed-design ANOVA
with condition as the within-subject and group as the between-subject factor? Or would that lead to alpha-error accumulation?

May I do post-hoc tests if there is no three-way interaction, but only a two-way one?
And whenever I try, SPSS refuses to run post-hoc tests ("there are not more than two groups"), so should I use independent t-tests to follow up any significant interactions?

I am aware that it is a lot, but I hope you might be so kind as to help me!

Thank you in advance

Best Answer

I think the choice to use a full factorial or separate tests depends mostly on what is common practice in your field. But I can also give some recommendations:

a) and b): I don't think it would be a good idea to run an ANOVA across different tests or questionnaires unless they measure the same thing. If they test for different things, they should be analysed separately. If they test for the same thing, but induce a pseudo-condition by the way they are conducted, then you could justify a full-factorial design.

c): A very reasonable approach would be to analyse each electrode separately with a two-way ANOVA. You could even adjust your alpha for the fact that you ran two tests "not knowing where the effect would be." If, on the other hand, the two placements were chosen based on some hypothesis, then you could justify not adjusting. However, if you do want to test the three-way interaction (does the treatment have a different effect depending on the condition, and does this vary with electrode?), then you must do the full three-way analysis.
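One way to sketch the per-electrode two-way mixed design is as a linear mixed model with a random intercept per subject (this is an approximation of the mixed ANOVA, not SPSS output; the data below are entirely made up, and the effect of treatment under condition B is assumed):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for grp in ("control", "treated"):
    for s in range(30):
        subj_eff = rng.normal(0.0, 0.5)  # random subject intercept
        for cond in ("A", "B"):
            # hypothetical effect: treatment raises amplitude only under condition B
            mu = 1.5 if (grp == "treated" and cond == "B") else 0.0
            rows.append({
                "subject": f"{grp}_{s}",
                "group": grp,
                "condition": cond,
                "amplitude": mu + subj_eff + rng.normal(0.0, 1.0),
            })
df = pd.DataFrame(rows)

# group is between-subject, condition is within-subject; the random
# intercept per subject accounts for the repeated measurements
model = smf.mixedlm("amplitude ~ group * condition", df, groups="subject").fit()
print(model.summary())
```

The group-by-condition interaction term is the one that answers "does the treatment effect depend on the condition" at this electrode.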

I don't know how SPSS handles this, but if you have a significant two-way interaction, then you should be able to do "simple effects" testing to determine at which condition or electrode level the treatment effect differed.
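Simple-effects follow-up can be done by hand: test the group difference within each level of the other factor, correcting for the number of follow-up tests. A sketch with invented data (scipy assumed; the simulated effect only exists under condition B):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 30
# hypothetical EEG amplitudes: a group difference only under condition B
data = {
    ("treated", "A"): rng.normal(0.0, 1.0, n),
    ("control", "A"): rng.normal(0.0, 1.0, n),
    ("treated", "B"): rng.normal(1.5, 1.0, n),
    ("control", "B"): rng.normal(0.0, 1.0, n),
}

# simple effects: group difference within each condition,
# Sidak-correcting for the two follow-up tests
alpha = 1 - (1 - 0.05) ** (1 / 2)
results = {}
for cond in ("A", "B"):
    t, p = stats.ttest_ind(data[("treated", cond)], data[("control", cond)])
    results[cond] = p
    print(f"condition {cond}: t = {t:.2f}, p = {p:.4f}, significant: {p < alpha}")
```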

Without an interaction, in a main-effects-only model, you don't need post-hoc tests: each factor has only two levels, so the main effect itself already tests the difference in means between those two levels.

Finally, it should also be said that if you did not fully randomise the order in which your levels were given (i.e. all of Test 1, all of Q1, or condition A was tested first), then you cannot put all of them into one ANOVA anyway, because the differences between tests/Qs/conditions would be confounded with time and learning of the procedure.
