Solved – Should I use ANOVA or MANOVA for a repeated measures experiment with two groups and several DVs

anova · group-differences · manova · repeated-measures

I have data on two groups of subjects. Each group was measured three times, and there were four dependent variables, all measuring level of fitness (a heart-rate score across five stages of the test, the minutes they achieved, the distance they achieved, and rated perceived exertion).

My hypothesis is that there will be a significant difference in Group A's fitness level compared to Group B's. Group A is in an advanced training programme and Group B is in a 'normal' training programme, and their fitness is measured before training, at 4 weeks, and at a 6-month follow-up.

I'm leaning toward a one-way repeated measures ANOVA, but I'm also starting to wonder whether I should be using a one-way repeated measures MANOVA because I have four dependent variables. Or does this not matter? Could I still use repeated measures ANOVA?

Best Answer

It sounds like you're looking to test one basic hypothesis on four similar variables. Setting aside the thought of estimating a latent fitness factor from these four variables, you can approach this with a mixed effects MANOVA. Because you expect fitness in general to change differently for your two groups, and you have four indicators of fitness, you can test your hypothesis of group differences on each of your four dependent variables while controlling for multiple comparisons using MANOVA.
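For concreteness, here is a minimal sketch of how the multivariate test of group differences could be set up in Python with statsmodels. The file and column names (fitness.csv, group, time, hr_score, minutes, distance, rpe) are hypothetical placeholders for your own data, and this only shows the between-group comparison at one measurement occasion, not the full repeated-measures structure:

```python
# Minimal sketch (not necessarily the exact analysis described above):
# a one-way MANOVA on the four fitness outcomes at a single occasion.
# Column names and the CSV file are hypothetical placeholders.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("fitness.csv")        # long format: one row per subject per time point
baseline = df[df["time"] == "pre"]     # e.g. test group differences at baseline only

# All four dependent variables on the left-hand side, the grouping factor on the right.
mv = MANOVA.from_formula("hr_score + minutes + distance + rpe ~ group",
                         data=baseline)
print(mv.mv_test())                    # Wilks' lambda, Pillai's trace, etc.
```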

As I understand your problem, you want to include a random effect for individuals measured repeatedly. Your fixed effect is training programme. You have four dependent variables. You expect no group differences at the first measurement, but expect group differences at the second, and I'm guessing you expect those differences to remain stable through the third measurement. You could test group differences and the group variable's interaction with measurement time in a repeated measures ANOVA for each of your four dependent variables, but MANOVA makes the test more conservative by controlling for the familywise error rate inflation caused by taking four separate whacks at the hypothesis.
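As a sketch of the univariate, repeated-measures side of this, one way to get the random effect for individuals is a mixed model per outcome with a random intercept for each subject and a group-by-time interaction. Again, the column names (subject, group, time, and the four outcomes) are assumptions, not something from your data:

```python
# Sketch: one mixed-effects model per dependent variable, with a random
# intercept per subject and a group x time interaction term.
# Column names and the CSV file are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("fitness.csv")        # long format: one row per subject per time point

for dv in ["hr_score", "minutes", "distance", "rpe"]:
    model = smf.mixedlm(f"{dv} ~ C(group) * C(time)", data=df,
                        groups=df["subject"])
    result = model.fit()
    # The C(group):C(time) coefficients carry the hypothesis that the
    # two groups change differently across the three measurements.
    print(dv)
    print(result.summary())
```

Running four separate univariate models like this is exactly the "four separate whacks" situation; if you go this route rather than MANOVA, you would want some correction for multiple comparisons (e.g. Bonferroni) across the four outcomes.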

I should warn you that these general linear models are conventionally fitted by ordinary least squares estimation, which may produce biased estimates of standard errors, and therefore biased significance test results, if your data don't meet the assumptions... and real data often don't. Also, there is some controversy regarding the utility of controlling for familywise error inflation. If you want to be sure you're choosing the right analysis for your purposes and care to study the related issues, @HorstGrünbusch's link to this question is definitely a good one to follow too:

Related Question