Solved – Can you accept the alternative hypothesis if the global test is significant but the pairwise comparisons are not

anova, hypothesis-testing, tukey-hsd-test

My hypothesis states that "there would be significant differences among the a, b, c and d groups on the X measure." I ran an ANOVA to test the group differences, and it came out significant. I then applied the Tukey test post hoc to examine the pairwise comparisons further, but none of the pairwise differences was significant. My supervisor says the hypothesis is "partially accepted." Can you explain how, or whether, this statement is correct?

Best Answer

Your ANOVA was significant, implying you either made a Type I error or the means are not all equal (in which case the null is false).

Since the chance of making a Type I error was (presumably) set fairly low, the second option becomes a relatively plausible explanation for the size of the test statistic.

In that sense, the research hypothesis you stated is indicated.

However, your multiple comparisons were unable to clearly identify any specific 'cause' of that difference. Most likely there are several small effects that, taken together, are enough for you to conclude there's a difference, even though none alone is large enough to 'stand out' by itself and let you say "this pair of groups differs on X".

(Such a thing happens not infrequently, especially when sample size calculations are based on only just achieving moderate power at some overall effect size. If the individual effect sizes are all a little smaller than that, you may be unlikely to detect any one of them.)
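To see how this can play out, here is a minimal simulation sketch (the group means, standard deviation, and sample sizes are illustrative assumptions, not taken from your study). It counts how often a one-way ANOVA comes out significant while no Tukey HSD pairwise comparison does, when several small effects are spread evenly across four groups; the exact counts will depend on the chosen parameters and seed.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
means = [0.0, 0.2, 0.4, 0.6]   # small, evenly spread group effects (assumed)
n_per_group = 30
labels = np.repeat(["a", "b", "c", "d"], n_per_group)

n_sims = 500
anova_sig = anova_only = 0
for _ in range(n_sims):
    groups = [rng.normal(m, 1.0, n_per_group) for m in means]
    _, p = stats.f_oneway(*groups)          # global F-test
    if p < 0.05:
        anova_sig += 1
        tukey = pairwise_tukeyhsd(np.concatenate(groups), labels)
        if not tukey.reject.any():          # no pairwise comparison significant
            anova_only += 1

print(f"ANOVA significant in {anova_sig}/{n_sims} simulations")
print(f"... of which no Tukey pair was significant in {anova_only}")
```

The point is not the particular numbers but that a nontrivial fraction of significant global tests can be accompanied by a set of pairwise comparisons none of which clears the (stricter, multiplicity-adjusted) Tukey threshold.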

Edit: To address the specific phrasing of the research hypothesis being 'partially accepted' -

It depends on what you mean by "correct".

I would not use such a phrase - either accepting the alternative or 'partial' in reference to it. You rejected the null, and there was nothing partial about that.

I think the important thing is to convey exactly what null was rejected.

I'd also present clear displays of the means and (ANOVA-based) standard errors of the means (likely along with the raw data on the same display), so that the effect sizes relative to the uncertainty are clear to the readership.
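A rough plotting sketch of that kind of display is below (the group names and data are placeholders; substitute your own). It shows the raw data, the group means, and error bars based on the pooled, ANOVA-based standard error of each mean.

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder data; replace with your own four groups.
rng = np.random.default_rng(0)
data = {g: rng.normal(loc, 1.0, 30) for g, loc in zip("abcd", [0.0, 0.2, 0.4, 0.6])}

names = list(data)
values = [data[g] for g in names]
means = [v.mean() for v in values]

# Pooled (ANOVA-based) standard error of each mean: sqrt(MS_within / n_i)
n_total = sum(len(v) for v in values)
ms_within = sum(((v - v.mean()) ** 2).sum() for v in values) / (n_total - len(values))
sems = [np.sqrt(ms_within / len(v)) for v in values]

x = np.arange(len(names))
for xi, v in zip(x, values):
    # raw data, jittered horizontally so points don't overplot
    plt.scatter(np.full(len(v), float(xi)) + rng.uniform(-0.1, 0.1, len(v)),
                v, alpha=0.3, s=10)
plt.errorbar(x, means, yerr=sems, fmt="o", color="black", capsize=4)  # means ± SE
plt.xticks(x, names)
plt.ylabel("X")
plt.show()
```

A display like this lets readers judge for themselves how the spread of group means compares with the uncertainty in each mean, which is exactly the information the non-significant pairwise comparisons fail to convey on their own.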

I certainly have never used such phrasing and don't imagine I ever will, but that doesn't make it objectively wrong. What matters most is that the audience of such a phrase clearly understand the intended meaning.