For my current project I need to compare the means of four groups by one-way ANOVA. To test whether my data come from a normal distribution, I have checked each group for normality with the Shapiro-Wilk test, and now I have four p-values, i.e. one p-value per group. Should I apply a Bonferroni correction to these p-values?
Solved – Testing for normality and Bonferroni correction
bonferroni, multiple-comparisons
Related Solutions
If you wish to control the familywise type I error rate, then you need to adjust for multiplicity. In particular, if you wish to emphasize p-values, statistical significance, or whether CIs exclude some value in order to claim that some of the multiple comparisons you are looking at are "statistically significant", then that is often a situation where you would want such an adjustment.
The Bonferroni correction is one of the simplest (and most conservative) adjustments for multiplicity. You can easily adjust your CIs to match the adjustment (e.g. with 2 hypotheses you calculate 97.5% confidence intervals instead of 95% CIs). Other adjustments are uniformly more powerful (e.g. Bonferroni-Holm), but make it harder to find matching CIs.
There are of course approaches for dealing with multiplicity other than controlling the familywise type I error rate, e.g. using shrinkage in Bayesian hierarchical models instead.
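To make the CI adjustment above concrete, here is a minimal Python sketch; the helper names are mine, not from any particular library:

```python
# Minimal sketch: per-comparison significance level and matching
# confidence level under a Bonferroni correction for m comparisons.
# (Illustrative helper names, not from any particular library.)

def bonferroni_alpha(alpha: float, m: int) -> float:
    """Per-comparison significance level that keeps the familywise
    type I error rate at most alpha across m comparisons."""
    return alpha / m

def matching_ci_level(alpha: float, m: int) -> float:
    """Confidence level whose simultaneous intervals match the
    Bonferroni-corrected tests."""
    return 1 - alpha / m

# With 2 hypotheses at familywise alpha = 0.05, each test is run at
# level 0.025 and the matching intervals are 97.5% CIs.
print(bonferroni_alpha(0.05, 2))
print(matching_ci_level(0.05, 2))
```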
Maybe I'm getting it wrong, but it seems that both methods return the same corrected p-values because the three original p-values are equal.
For the Bonferroni correction, you simply multiply each p-value by the number of p-values (here by $3$).
For Holm-Bonferroni, you first sort the p-values and then multiply the smallest by $3$, the second smallest by $2$, and so on.
But if at any step the corrected p-value is smaller than the previous one, it is set equal to it; this monotonicity step is part of how the adjusted p-values are defined, and it is what R's p.adjust function does.
For example, say your (sorted) p-values are $(0.1,0.11,0.5)$.
The first one is multiplied by $3$: $0.1 \to 0.3$; the second one by $2$: $0.11 \to 0.22$.
But since $0.22 < 0.3$, the corrected p-value for the second one is set to $0.3$. The third one is multiplied by $1$ and stays $0.5$.
The corrected p-values are then $(0.3,0.3,0.5)$.
I think this is what happens here: since all the p-values are equal, the first one is multiplied by $3$, which gives $0.147$, and the second one by $2$, which gives $0.098$. Since $0.098 < 0.147$, the corrected p-value for the second one is actually $0.147$. The same goes for the third one.
Then all three corrected p-values are the same (and equal to those given by the Bonferroni correction, since they have all effectively been multiplied by $3$).
If the p-values differ, the Holm method will generally give you something different from the Bonferroni method.
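The behaviour described above can be reproduced with a short Python sketch of the two adjustments, a hand-rolled version of what R's p.adjust does with methods "bonferroni" and "holm":

```python
# Sketch of Bonferroni and Holm-Bonferroni p-value adjustment,
# mirroring R's p.adjust(method = "bonferroni") and p.adjust(method = "holm").

def bonferroni(pvals):
    """Multiply each p-value by the number of tests, capped at 1."""
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

def holm(pvals):
    """Step-down Holm adjustment: the k-th smallest p-value is
    multiplied by (m - k), then a running maximum enforces that the
    adjusted p-values are non-decreasing in the sorted order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        candidate = min(1.0, (m - rank) * pvals[i])
        running_max = max(running_max, candidate)
        adjusted[i] = running_max
    return adjusted

# Equal p-values (the situation in the question): both methods agree.
print(bonferroni([0.049, 0.049, 0.049]))
print(holm([0.049, 0.049, 0.049]))

# The worked example above: Holm yields (0.3, 0.3, 0.5),
# while Bonferroni yields (0.3, 0.33, 1.0).
print(holm([0.1, 0.11, 0.5]))
print(bonferroni([0.1, 0.11, 0.5]))
```

Running this shows the equal-p-value case collapsing to three identical corrected values, and the unequal case diverging between the two methods.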
Best Answer
Bonferroni is used to control false discoveries (type I errors). Your 4 p-values, if I'm interpreting your question correctly, come from assumption tests, not from tests meant to demonstrate the significance of your discoveries, and therefore they don't call for a Bonferroni correction. I doubt that you are trying to demonstrate non-normality, or that you would claim a "discovery" upon detecting non-normality. In fact, the goal of assumption tests is typically NONSIGNIFICANCE, not significance. Therefore, there is no reason to apply a Bonferroni correction to assumption tests under typical circumstances.