Solved – Correcting for multiple comparisons on planned contrasts with an ANCOVA

adjustment, ancova, multiple-comparisons, planned-comparisons-test

I have done an experiment with 7 treatment conditions designed to reduce alcohol intake, and a control condition. The dependent variable is alcohol intake after treatment, and I have run an ANCOVA with baseline alcohol intake as a covariate.

I found no effect from the omnibus ANCOVA F-test, but wanted to investigate specific comparisons between the treatment conditions (which was the main aim of the research).

I therefore used planned contrasts to look at two specific questions:

  • The average of all treatment conditions vs. the control condition (no effect found).

  • Treatment 1 vs. all the other treatments individually.

    e.g. treatment 1 vs. treatment 2,
    treatment 1 vs. treatment 3,
    treatment 1 vs. treatment 4, etc.

I have found a significant difference between treatment 1 and treatment 6 (which I had hypothesised – but not based on strong evidence, more of a hunch).

However, I have not applied any correction for the inflated Type I error rate that comes with doing these multiple comparisons. Everything I've read on this topic so far has been pretty vague about when corrections are needed, e.g. "oh, it's OK as long as you're only doing 'a few' planned contrasts, or if you have an a priori reason for them" – but that's not very helpful. How many is 'a few'?

My question is: do you think I need to apply an adjustment for multiple comparisons (I have done 7 in total)?
And if so, which adjustment? I've read that only certain adjustments can be used with an ANCOVA (as opposed to an ANOVA). Dunnett's test looks like it could be right, but I don't know whether it is possible to use Dunnett's test with an ANCOVA.

Any help (or even pointing me in the direction of some more useful reading) would be much appreciated.

Best Answer

You can use the lsmeans package, or the glht function from the multcomp package, to calculate post-hoc corrected tests across treatment conditions. The syntax for lsmeans is something like

summary(contrast(lsmeans(fit, ~ treatment), method = "trt.vs.ctrl", adjust = "dunnett"))

to get Dunnett's post-hoc corrected p-values for the repeated comparisons against your control reference level, or

summary(contrast(lsmeans(fit, ~ treatment), method = "pairwise", adjust = "tukey"))

to get Tukey corrected pairwise comparisons among all your treatment factor levels...
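The multcomp route works just as well with an ANCOVA, since glht simply takes a fitted lm/aov object, covariates and all. A minimal self-contained sketch – the data frame, column names and level names here are made up for illustration, not taken from your study:

```r
library(multcomp)

# Simulated stand-in data (names are assumptions):
# intake = post-treatment alcohol intake, baseline = covariate,
# treatment = factor with the control condition as reference level
set.seed(1)
dat <- data.frame(
  treatment = factor(rep(c("control", paste0("t", 1:7)), each = 10)),
  baseline  = rnorm(80, mean = 20, sd = 5)
)
dat$intake <- dat$baseline + rnorm(80, sd = 3)
dat$treatment <- relevel(dat$treatment, ref = "control")

# ANCOVA: treatment effect on post-treatment intake, adjusting for baseline
fit <- lm(intake ~ baseline + treatment, data = dat)

# Dunnett-adjusted comparisons of every treatment against the control
summary(glht(fit, linfct = mcp(treatment = "Dunnett")))
```

So yes, Dunnett's test is perfectly possible with an ANCOVA – the comparisons are done on the covariate-adjusted means from the fitted model.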

Personally I think it's always a good idea to apply corrections for multiple testing whenever you are repeatedly testing related hypotheses. But as you say, planned contrasts are sometimes treated as an exception, and some people don't apply any correction to them. I think it's a bit of a grey zone...
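If you do decide to correct, one middle road is to spell out your seven planned contrasts explicitly and correct only over that family, rather than over all pairwise comparisons. A sketch with lsmeans – it assumes fit is your fitted ANCOVA model and that the factor levels are ordered control, t1, ..., t7:

```r
library(lsmeans)

# Covariate-adjusted (least-squares) mean per condition
lsm <- lsmeans(fit, ~ treatment)

# Coefficient order assumed: control, t1, t2, t3, t4, t5, t6, t7
planned <- list(
  "treatments vs control" = c(-1, rep(1/7, 7)),
  "t1 vs t2" = c(0, 1, -1,  0,  0,  0,  0,  0),
  "t1 vs t3" = c(0, 1,  0, -1,  0,  0,  0,  0),
  "t1 vs t4" = c(0, 1,  0,  0, -1,  0,  0,  0),
  "t1 vs t5" = c(0, 1,  0,  0,  0, -1,  0,  0),
  "t1 vs t6" = c(0, 1,  0,  0,  0,  0, -1,  0),
  "t1 vs t7" = c(0, 1,  0,  0,  0,  0,  0, -1)
)

# Bonferroni is conservative but simple; adjust = "mvt" gives a less
# conservative multivariate-t adjustment over exactly these 7 tests
summary(contrast(lsm, planned), adjust = "bonferroni")
```

That way the penalty reflects the 7 tests you actually planned, not the 28 pairwise comparisons you didn't.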

You can also get least square means for your different factor levels using

confint(lsmeans(fit, ~ treatment))