Solved – Bartlett’s test vs Levene’s test

anova, data-transformation, heteroscedasticity, levenes-test

I am currently trying to address violations of the ANOVA assumptions. I have used the Shapiro–Wilk test to check normality, and have dabbled with both Levene's test and Bartlett's test of equality of variances. I have since log-transformed my data to try to remedy the unequal variances. I reran Bartlett's test on the log-transformed data and still got a significant p-value, and out of curiosity I also ran Levene's test and got a non-significant p-value. Which test should I rely on?
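For concreteness, here is a minimal sketch of that workflow in Python with scipy.stats, using made-up groups `g1`, `g2`, `g3` (the data-generating step and variable names are hypothetical, purely for illustration):

```python
# Sketch of the workflow described above: normality check, then
# Bartlett's and Levene's tests on raw and log-transformed data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical skewed groups with unequal spread
g1, g2, g3 = (rng.lognormal(0.0, s, 30) for s in (0.5, 1.0, 1.5))

# Shapiro-Wilk normality test on each group
for name, g in zip(("g1", "g2", "g3"), (g1, g2, g3)):
    print(f"Shapiro-Wilk p for {name}: {stats.shapiro(g).pvalue:.4f}")

# Equality-of-variance tests on the raw data
print("Bartlett p:", stats.bartlett(g1, g2, g3).pvalue)
print("Levene  p:", stats.levene(g1, g2, g3, center="median").pvalue)

# Same tests after a log transformation
logs = [np.log(g) for g in (g1, g2, g3)]
print("Bartlett p (log):", stats.bartlett(*logs).pvalue)
print("Levene  p (log):", stats.levene(*logs, center="median").pvalue)
```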

Best Answer

Probably neither. It would be better to look at your data and see how bad the violations are. Linear models (e.g., ANOVA) are fairly robust to minor violations when the group $n$s are equal. A rule of thumb for heteroscedasticity is that the maximum group variance can be as much as 4 times the minimum group variance without too much damage to your analysis. If you are worried that there may be violations, an even better approach is to simply use analyses that are robust to the possible violations from the start, rather than trying to detect violations and then make decisions based on that.[1]
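A quick way to eyeball that rule of thumb is to compare the largest and smallest group variances directly. A minimal sketch, again with hypothetical groups:

```python
# Rule-of-thumb check: is the largest group variance more than ~4x the smallest?
import numpy as np

rng = np.random.default_rng(1)
groups = [rng.normal(0, s, 30) for s in (1.0, 1.5, 2.5)]  # hypothetical groups

variances = [np.var(g, ddof=1) for g in groups]  # sample variances
ratio = max(variances) / min(variances)
print(f"max/min variance ratio = {ratio:.2f}")
print("within the rule of thumb" if ratio <= 4 else "variances look problematically unequal")
```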

For what it's worth, Wikipedia says that Bartlett's test is more sensitive to violations of normality than Levene's test. So you may have non-normal data instead of heteroscedastic data. Again, a more robust analysis may be preferable.[2]
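If unequal variances do turn out to be the real issue, one commonly suggested robust option is Welch's one-way ANOVA, which does not assume equal group variances. A minimal sketch, assuming a recent statsmodels is available and using hypothetical groups:

```python
# Welch's one-way ANOVA (no equal-variance assumption)
# via statsmodels' anova_oneway with use_var="unequal".
import numpy as np
from statsmodels.stats.oneway import anova_oneway

rng = np.random.default_rng(2)
g1 = rng.normal(10, 1.0, 30)   # hypothetical groups with unequal spread
g2 = rng.normal(11, 2.0, 30)
g3 = rng.normal(12, 3.5, 30)

res = anova_oneway([g1, g2, g3], use_var="unequal")  # Welch correction
print("Welch F-like statistic:", res.statistic)
print("p-value:", res.pvalue)
```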

1. See: A principled method for choosing between t test or non-parametric e.g. Wilcoxon in small samples.
2. For various ways of dealing with problematic heteroscedasticity, see: Alternatives to one-way ANOVA for heteroskedastic data.