Solved – “Normalized” standard deviation

I posted a question earlier but failed miserably in trying to explain what I am looking for (thanks to those who tried to help me anyway). Will try again, starting with a clean sheet.

Standard deviations are sensitive to scale. I am trying to perform a statistical test where the best result is indicated by the lowest standard deviation among different data sets. Is there a way to "normalize" the standard deviation for scale, or a different standard-deviation-type test I could use altogether?

Unfortunately, dividing the resulting standard deviation by the mean (the coefficient of variation) does not work in my case, as the mean is almost always close to zero.
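
To show concretely what goes wrong, here is a small Python sketch with made-up numbers (not my real data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sample: spread of about 1, mean essentially zero
x = rng.normal(loc=0.001, scale=1.0, size=50)

sd = np.std(x, ddof=1)          # sample standard deviation
cv = sd / np.mean(x)            # coefficient of variation, sd / mean
print(f"mean={np.mean(x):.4f}  sd={sd:.4f}  cv={cv:.1f}")
# With the mean this close to zero, the cv is enormous and its sign
# flips with tiny changes in the mean, so it is useless for comparing
# spread across data sets.
```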

Thanks

Best Answer

If all your measurements use the same units, then you've already addressed the scale problem; what's bugging you is the degrees of freedom and the precision of your estimates of the standard deviation. If you recast your problem as comparing variances, then there are plenty of standard tests available.

For two independent samples, you can use the F test; its null distribution follows the (surprise) F distribution, which is indexed by degrees of freedom, so it implicitly adjusts for what you're calling a scale problem. If you're comparing more than two samples, either Bartlett's or Levene's test might be suitable. Of course, these have the same problem as one-way ANOVA: they don't tell you which variances differ significantly. However, if, say, Bartlett's test did identify inhomogeneous variances, you could do follow-up pairwise comparisons with the F test and make a Bonferroni adjustment to maintain your experimentwise Type I error (alpha).
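
As a rough illustration of that workflow, here is my own SciPy sketch on three hypothetical samples (not code from the handbook):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Three hypothetical samples with means near zero but different spreads
a = rng.normal(0, 1.0, size=30)
b = rng.normal(0, 1.6, size=30)
c = rng.normal(0, 1.0, size=30)

# Omnibus tests for equal variances across all groups
print("Bartlett:", stats.bartlett(a, b, c))   # assumes normality
print("Levene:  ", stats.levene(a, b, c))     # more robust to non-normality

def f_test(x, y):
    """Two-sided F test for equality of two variances."""
    f = np.var(x, ddof=1) / np.var(y, ddof=1)
    dfn, dfd = len(x) - 1, len(y) - 1
    p = 2 * min(stats.f.cdf(f, dfn, dfd), stats.f.sf(f, dfn, dfd))
    return f, min(p, 1.0)

# Follow-up pairwise F tests with a Bonferroni adjustment
pairs = [("a", "b", a, b), ("a", "c", a, c), ("b", "c", b, c)]
for n1, n2, x, y in pairs:
    f, p = f_test(x, y)
    print(f"F test {n1} vs {n2}: F={f:.2f}, "
          f"Bonferroni-adjusted p={min(p * len(pairs), 1.0):.4f}")
```

Levene's test with SciPy's default median centering (the Brown-Forsythe variant) is the safer choice if you can't count on normality; Bartlett's test is more powerful but sensitive to departures from it.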

You can get details for all of this stuff in the NIST/SEMATECH e-Handbook of Statistical Methods.