Solved – Bonferroni Adjustment and Assumptions

Tags: assumptions, bonferroni, psychology

I'd just like to get clarification on something. When you perform a Bonferroni adjustment (dividing the alpha level by the number of tests you want to run, say when doing multiple ANOVAs), do you just check the assumptions as recommended in whatever guide you're following, with no adjustments or alterations, and ONLY apply the Bonferroni-adjusted alpha level to the significance values of your main outcomes?
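(To be clear about the arithmetic I mean, with, say, $m = 5$ ANOVAs and an overall $\alpha = .05$:

$$\alpha_{\text{per test}} = \frac{\alpha}{m} = \frac{.05}{5} = .01,$$

so each ANOVA's main outcome would be tested against .01 instead of .05.)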

Or do you have to apply some adjustment to the assumption tests' significance values in your analysis?

For example, take the assumption tests for ANOVA: Levene's Test of Equality of Error Variances (where you want a non-significant result, i.e. over .05), or Box's M statistic (where you want a non-significant result, i.e. over .001). Do you use the divided alpha for them? Do you divide their thresholds by the number of tests you're doing even when they're already lower than the usual .05 (as with Box's M's .001)? Or do you multiply them by the number of tests you're doing? I can't remember whether there are any statistical assumption tests that require a significant result (in my case I don't think there are), but for anyone wondering the same thing in that situation, the question applies there too.

If you could provide simple explanations, or links to simple explanations, and ideally texts/references/sources I can cite, that would be much appreciated.

No specific degrees in maths or stats, so the simpler the better.

Best Answer

When you analyse the proof that Bonferroni controls the family-wise type I error rate, you see that no assumptions are needed; it basically uses only Boole's inequality. So Bonferroni does not need, e.g., an independence assumption.
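To make the step explicit: if each of the $m$ individual tests is carried out at level $\alpha/m$, and $A_i$ denotes the event that test $i$ falsely rejects its (true) null hypothesis, then Boole's inequality gives

$$P\!\left(\bigcup_{i=1}^{m} A_i\right) \le \sum_{i=1}^{m} P(A_i) \le m \cdot \frac{\alpha}{m} = \alpha,$$

and this bound holds whatever the dependence between the tests.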

However, inspection of the proof also shows that the type I error probability is only guaranteed to be at most $\alpha$; i.e. the Bonferroni method can have a type I error probability that is strictly smaller than $\alpha$ (and this will result in a loss of power).

The cases where the type I error probability is strictly smaller than $\alpha$ (one says that in these cases Bonferroni is conservative) occur when the tests are dependent or when the p-values of the individual tests are themselves conservative. The latter can be the case for discrete random variables: in a univariate test for a binomial variable, for example, the "observed" type I error probability may be strictly smaller than $\alpha$.
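As an illustration of that last point (a small sketch I am adding here; the values $n = 10$, $p_0 = 0.5$ are chosen purely for demonstration), the exact size of a one-sided binomial test can be computed directly. Because the test statistic is discrete, the largest rejection region with null probability at most .05 has an actual size well below .05:

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n Bernoulli(p) trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def exact_size_one_sided(n, p0, alpha):
    """Exact size of the one-sided test that rejects H0: p = p0 for
    large counts, using the biggest upper-tail rejection region whose
    null probability does not exceed alpha."""
    tail = 0.0
    cutoff = n + 1  # reject when observed count >= cutoff
    # Grow the upper tail k = n, n-1, ... while it still fits under alpha.
    for k in range(n, -1, -1):
        if tail + binom_pmf(k, n, p0) > alpha:
            break
        tail += binom_pmf(k, n, p0)
        cutoff = k
    return cutoff, tail

cutoff, size = exact_size_one_sided(n=10, p0=0.5, alpha=0.05)
print(f"reject when count >= {cutoff}; exact size = {size:.4f}")
# Output: reject when count >= 9; exact size = 0.0107
# The achievable size (~.011) is far below the nominal .05,
# so the individual test is already conservative.
```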

Note that Holm's method also controls the family-wise type I error probability, and its power is at least as good as that of the Bonferroni method. For discrete random variables, like the binomial, other multiple-testing correction methods have been shown to be more powerful (e.g. minP).
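As a concrete comparison (a minimal sketch I am adding, with made-up p-values purely for illustration), here is how the two corrections act on the same set of p-values; Holm rejects everything Bonferroni rejects, and sometimes more:

```python
def bonferroni_reject(pvals, alpha=0.05):
    """Reject H_i iff p_i <= alpha / m (same threshold for every test)."""
    m = len(pvals)
    return [p <= alpha / m for p in pvals]

def holm_reject(pvals, alpha=0.05):
    """Holm's step-down method: compare the k-th smallest p-value to
    alpha / (m - k) and stop at the first failure."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for k, i in enumerate(order):
        if pvals[i] <= alpha / (m - k):
            reject[i] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return reject

# Illustrative (made-up) p-values from, say, four ANOVAs:
pvals = [0.004, 0.013, 0.040, 0.300]
print(bonferroni_reject(pvals))  # [True, False, False, False]
print(holm_reject(pvals))        # [True, True, False, False]
```

With four tests Bonferroni compares every p-value to $.05/4 = .0125$, so only the first is rejected; Holm compares the second-smallest p-value to the larger threshold $.05/3 \approx .0167$ and so rejects it as well.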

So to summarise: the Bonferroni method does not need any additional assumptions for the type I error probability to be controlled; however, it can be conservative.