Mean Absolute Deviation vs Standard Deviation – Key Differences Explained

Tags: distributions, frequency, standard deviation, variability

In the textbook "New Comprehensive Mathematics for O Level" by Greer (1983), I see the average deviation calculated like this:

Sum up the absolute differences between the individual values and the mean, then take their average. Throughout the chapter the term mean deviation is used.
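
In symbols (using $x_1, \dots, x_n$ for the values, $\bar{x}$ for their mean, and $n$ for the count; that notation is mine, not the book's), this recipe is:

$$\text{mean deviation} = \frac{1}{n}\sum_{i=1}^{n} \lvert x_i - \bar{x} \rvert$$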

But I've recently seen several references that use the term standard deviation and this is what they do:

Calculate the squares of the differences between the individual values and the mean, then take their average and, finally, the square root of the result.
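
In the same notation, that recipe gives:

$$\text{standard deviation} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} \left(x_i - \bar{x}\right)^2}$$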

I tried both methods on the same set of data and their answers differ. I'm not a statistician, and I got confused while trying to teach deviation to my kids.
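
For example, here is a quick sketch in Python (with a made-up data set, not one from the book) showing that the two recipes really do give different numbers:

```python
# Hypothetical data, just to illustrate that the two measures differ.
data = [2, 4, 4, 4, 5, 5, 7, 9]
n = len(data)
mean = sum(data) / n  # arithmetic mean = 5.0

# Mean deviation: average of the absolute deviations from the mean.
mean_deviation = sum(abs(x - mean) for x in data) / n

# Standard deviation: square root of the average squared deviation.
std_deviation = (sum((x - mean) ** 2 for x in data) / n) ** 0.5

print(mean_deviation)  # 1.5
print(std_deviation)   # 2.0
```

The two results (1.5 vs. 2.0 here) really are different numbers, not a rounding issue.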

So, in short: are the terms standard deviation and mean deviation the same, or is my old textbook wrong?

Best Answer

Both measure how far your values are spread around the mean of the observations.

An observation that is 1 below the mean is just as "far" from the mean as a value that is 1 above it. Hence you should ignore the sign of the deviation. This can be done in two ways:

  • Calculate the absolute values of the deviations and sum these.

  • Square the deviations and sum these squares. Because of the squaring you give more weight to large deviations, so this sum will differ from the sum of the absolute deviations (see the small illustration below).
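
As a small made-up illustration of that weighting, take two deviations of 1 and 3:

$$\lvert 1 \rvert + \lvert 3 \rvert = 4 \qquad\text{vs.}\qquad 1^2 + 3^2 = 10,$$

so the larger deviation contributes 3/4 of the first sum but 9/10 of the second.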

Averaging the absolute deviations gives the "mean deviation"; averaging the squared deviations and then taking the square root gives the "standard deviation".

The mean deviation is rarely used.