Comparing standard deviations between variables with very different ranges

mean, standard deviation, variance

I have a number of variables for which I want to compare the standard deviations. Each of these variables is actually the data for a particular wavelength of light, and I want to produce a plot showing the standard deviation for each wavelength.

However, the values I am calculating the SD from are much higher for some wavelengths than for others: at 500 nm, for example, I get values of around 50, while at 1000 nm my values are more like 1. In this case, comparing raw standard deviations seems rather silly, as an SD of 5 at 500 nm represents far less relative variability than an SD of 5 at 1000 nm.

In this situation, would it make sense to divide the SD by the mean to give a sort of percentage-style standard deviation measure? Or is there some other, better (or more 'proper') way of doing this? If I were to do this, would it make statistical sense?

Best Answer

What you're thinking about is the coefficient of variation, SD/mean (often reported as a percentage, the relative standard deviation). Because it is unitless, it puts wavelengths with very different mean levels on a comparable scale: in your example, an SD of 5 around a mean of 50 gives a CV of 0.1, while an SD of 5 around a mean of 1 gives a CV of 5, matching your intuition that the latter is far more variable relative to its level.
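As a minimal sketch of how you might compute and plot this (assuming your measurements sit in a NumPy array with one row per repeated measurement and one column per wavelength; the variable names and the simulated data below are purely hypothetical):

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical data: 100 repeated measurements (rows) at 6 wavelengths (columns).
# Means shrink from ~50 at 500 nm to ~1 at 1000 nm, with the same absolute SD of 5.
wavelengths = np.linspace(500, 1000, 6)  # nm
rng = np.random.default_rng(0)
spectra = rng.normal(loc=np.linspace(50, 1, 6), scale=5, size=(100, 6))

means = spectra.mean(axis=0)
sds = spectra.std(axis=0, ddof=1)   # sample standard deviation per wavelength
cv = sds / means                    # coefficient of variation per wavelength

plt.plot(wavelengths, cv, marker="o")
plt.xlabel("Wavelength (nm)")
plt.ylabel("Coefficient of variation (SD / mean)")
plt.show()
```

One standard caveat: the CV is only meaningful for ratio-scale data with positive means (as light intensities typically are); where the mean approaches zero, the CV blows up and stops being a useful summary.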
