Since multiple comparison tests are often called 'post tests', you'd think they logically follow the one-way ANOVA. In fact, this isn't so.
"An unfortunate common practice is to pursue multiple comparisons only when the null hypothesis of homogeneity is rejected." (Hsu, page 177)
Will the results of post tests be valid if the overall P value for the ANOVA is greater than 0.05?
Surprisingly, the answer is yes. With one exception, post tests are valid even if the overall ANOVA did not find a significant difference among means.
The exception is the first multiple comparison test invented, the protected Fisher Least Significant Difference (LSD) test. The first step of the protected LSD test is to check whether the overall ANOVA rejects the null hypothesis of identical means. If it doesn't, individual comparisons should not be made. But this protected LSD test is outmoded and no longer recommended.
Is it possible to get a 'significant' result from a multiple comparisons test even when the overall ANOVA was not significant?
Yes, it is possible, with one exception: Scheffé's test. Scheffé's test is intertwined with the overall F test, so if the overall ANOVA has a P value greater than 0.05, Scheffé's test cannot find any significant pairwise comparisons. In that case, performing post tests after a nonsignificant overall ANOVA is a waste of time, but it won't lead to invalid conclusions. Other multiple comparison tests, however, can (sometimes) find significant differences even when the overall ANOVA showed no significant differences among groups.
How can I understand the apparent contradiction between an ANOVA that finds no significant difference among the group means and a post test that finds differences?
The overall one-way ANOVA tests the null hypothesis that all the treatment groups have identical mean values, and that any difference you happened to observe is due to random sampling. Each post test tests the null hypothesis that two particular groups have identical means.
The post tests ask more focused questions, so they have more power: a post test can find a significant difference between two particular groups even when the overall ANOVA reports that the differences among all the means are not statistically significant.
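To make this concrete, here is a small self-contained sketch in Python. The dataset and the tabulated critical values are illustrative choices, not from the original text: three groups of five observations are constructed so that the overall F statistic falls below the 5% critical value of F(2, 12) ≈ 3.885, while an unadjusted pairwise t comparison of the two extreme groups (the building block of Fisher's LSD) exceeds the 5% critical value of t(12) ≈ 2.179.

```python
import math

# Illustrative data (assumed, not from the original text):
# three groups of five observations with means 2, 3 and 4.
groups = [
    [0, 3, 3, 2, 2],  # mean 2
    [1, 4, 4, 3, 3],  # mean 3
    [2, 5, 5, 4, 4],  # mean 4
]

k = len(groups)                      # number of groups
n = len(groups[0])                   # observations per group
N = k * n
means = [sum(g) / n for g in groups]
grand = sum(sum(g) for g in groups) / N

# One-way ANOVA sums of squares and mean squares.
ss_between = n * sum((m - grand) ** 2 for m in means)
ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
ms_between = ss_between / (k - 1)
ms_within = ss_within / (N - k)      # pooled error variance
F = ms_between / ms_within

# Unadjusted pairwise t for the two extreme groups, using the pooled
# mean square within (as Fisher's LSD does), with N - k = 12 df.
t = (means[2] - means[0]) / math.sqrt(ms_within * (1 / n + 1 / n))

# Standard 5% table values for these degrees of freedom.
F_crit = 3.885   # F(2, 12), alpha = 0.05
t_crit = 2.179   # t(12), two-sided alpha = 0.05

print(f"F = {F:.3f} vs {F_crit}: overall ANOVA not significant")
print(f"t = {t:.3f} vs {t_crit}: pairwise comparison significant")
```

The unadjusted comparison shown here is the most liberal case; a method that corrects for multiplicity (Tukey, Bonferroni) makes this outcome less frequent, but with other datasets it can still occur.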
Are the results of the overall ANOVA useful at all?
ANOVA tests the overall null hypothesis that all the data come from groups that have identical means. If that is your experimental question -- do the data provide convincing evidence that the means are not all identical? -- then ANOVA is exactly what you want. More often, your experimental questions are more focused and are answered by multiple comparison tests (post tests). In these cases, you can safely ignore the overall ANOVA results and jump right to the post test results.
Note that the multiple comparison calculations all use the mean-square result from the ANOVA table. So even if you don't care about the value of F or the P value, the post tests still require that the ANOVA table be computed.
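To sketch that dependency (illustrative data and a hypothetical helper name, not from the original text), here is a minimal one-way ANOVA table in Python; even if you ignore F, a pairwise post test needs the mean square within as its pooled error variance:

```python
import math

def anova_table(groups):
    """Return (ms_between, ms_within, F) for a balanced one-way layout."""
    k = len(groups)
    n = len(groups[0])
    N = k * n
    means = [sum(g) / n for g in groups]
    grand = sum(sum(g) for g in groups) / N
    ms_between = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    ms_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g) / (N - k)
    return ms_between, ms_within, ms_between / ms_within

# Two illustrative groups of three observations each (assumed data).
_, ms_within, F = anova_table([[1, 2, 3], [3, 4, 5]])

# A post test's standard error of the difference between two group
# means is built from ms_within, with N - k error degrees of freedom.
se_diff = math.sqrt(ms_within * (1 / 3 + 1 / 3))
print(f"MS within = {ms_within:.3f}, SE of a difference = {se_diff:.3f}")
```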
If I were you, I wouldn't bother with any multiple comparisons testing.
Your two-way ANOVA answers all the relevant questions. You've learned that dehydration makes a difference (that's what the P value tells you; you should also quantify how large that difference is and decide whether it is large enough to care about). You've also established that there is no evidence that drug A has a different effect than drug B, or that the effect of dehydration differs between A and B (no statistically significant interaction). I don't see anything else to test, so I would ignore the SNK results.
Why are the SNK results different from the ANOVA interaction results? Without seeing the data, of course, it is impossible to know. But the Student-Newman-Keuls (SNK) test is not highly regarded, and doesn't really control the familywise significance level the way it is supposed to (M. A. Seaman, J. R. Levin and R. C. Serlin, Psychological Bulletin 110:577-586, 1991).
Be careful not to interpret ANOVA as a test of whether at least one group differs from the others. It is a test of the null hypothesis that the between-group variance is zero. A significant result doesn't guarantee that any particular group will deviate significantly from any other group.