[Math] Standard Deviation vs Standard Error

data analysis, standard deviation

I am confused about the difference between the standard deviation and the standard error. When do I use which? From my lecture:

  • Standard deviation quantifies the spread of the data about the mean
  • Standard error measures the spread of the sample means

I am actually learning this in my physics data analysis lesson. So suppose I have a set of measurements of distance ($D$) versus time ($t$) and I want to compute the average velocity:

$t_{avg}$ is calculated as $(t_1 + t_2 + t_3)/3$ (of course I used Excel's AVERAGE here). Then the average velocity was $D/t_{avg}$.
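(For concreteness, here is the same spreadsheet arithmetic written as a short Python sketch; the three time readings and the distance are made-up numbers, not my actual data.)

```python
import statistics

# Made-up example readings: three timed runs over one fixed distance.
t = [2.31, 2.29, 2.35]   # times in seconds
D = 1.50                 # distance in metres

t_avg = statistics.mean(t)   # same as Excel's AVERAGE(t1, t2, t3)
v_avg = D / t_avg            # average velocity = distance / mean time

print(t_avg, v_avg)
```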

Then I was asked which of measurements #1-10 I think is the closest approximation to the actual velocity. I was thinking I would find the standard error for each, and the one with the lowest standard error is the closest approximation? Is that a reasonable approach? I computed the standard error as STDEV(t_1, t_2, t_3)/SQRT(3). Is this correct?
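(In Python the same calculation would look roughly like this; `statistics.stdev` uses the $n-1$ denominator, just like Excel's STDEV, and the numbers are again invented.)

```python
import math
import statistics

t = [2.31, 2.29, 2.35]        # invented time readings in seconds

s = statistics.stdev(t)       # sample standard deviation, like Excel's STDEV
se = s / math.sqrt(len(t))    # standard error of the mean: STDEV(...)/SQRT(3)

print(s, se)
```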

Here's my confusion: the standard error is defined as the spread of all the means, but aren't I calculating the spread of my measured data rather than of any means?

Best Answer

In the context of your example: does it really make a difference which one you use? If the standard deviation is $s$, the standard error is just $s/\sqrt{n}$. That is a linear transformation (a fixed rescaling), so for the purpose of comparison it makes no difference which one you use.
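A quick numerical sketch of that point (two invented sets of readings of the same size $n$): dividing both spreads by the same $\sqrt{n}$ cannot change which set looks more variable.

```python
import math
import statistics

# Two hypothetical sets of repeated readings with the same n (values invented).
run_a = [2.31, 2.29, 2.35]
run_b = [2.40, 2.52, 2.44]

for name, run in (("run_a", run_a), ("run_b", run_b)):
    s = statistics.stdev(run)
    se = s / math.sqrt(len(run))
    print(f"{name}: s = {s:.4f}, se = {se:.4f}")

# Because se = s / sqrt(n) with the same n, the run with the smaller s
# also has the smaller se -- the comparison comes out the same either way.
```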

Now, for the purpose of a statistical test: if you compute a group of sample means, each from an independent and identically distributed sample of size $n$, then that collection of sample means has standard deviation $s/\sqrt{n}$.
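You can check that claim by simulation. The sketch below draws many samples of size $n = 3$ from an assumed normal measurement distribution and compares the spread of their means with $\sigma/\sqrt{n}$ (the seed, sample count, and distribution parameters are arbitrary choices):

```python
import random
import statistics

random.seed(0)
n = 3                             # readings per sample, as in the question
num_samples = 10_000              # number of simulated samples (arbitrary)
true_mean, true_sd = 2.30, 0.05   # assumed measurement distribution

sample_means = [
    statistics.mean(random.gauss(true_mean, true_sd) for _ in range(n))
    for _ in range(num_samples)
]

print(statistics.stdev(sample_means))   # empirical spread of the sample means
print(true_sd / n ** 0.5)               # sigma / sqrt(n); the two agree closely
```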

It's intuitive that a sample mean gives more information about the data than a single observation does, so $s/\sqrt{n} < s$; that is to say, the variability among the sample means is less than the variability among the individual observations.

TL;DR

- Use $s$ for the sake of comparison.
- For statistical tests, use $s/\sqrt{n}$ when, say, finding the distribution or a confidence interval for the sample mean (see the sketch after this list).
- For statistical tests, use $s$ when, say, finding the distribution or a confidence interval for an individual observation.
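Here is a rough sketch of those last two bullets, using a normal-approximation $z$ of 1.96 and invented readings (for $n = 3$ a $t$ critical value would really be more appropriate):

```python
import math
import statistics

t = [2.31, 2.29, 2.35]          # invented time readings in seconds
n = len(t)
mean = statistics.mean(t)
s = statistics.stdev(t)
z = 1.96                        # normal approximation to the critical value

# Interval for the *mean*: built from the standard error s / sqrt(n).
ci_mean = (mean - z * s / math.sqrt(n), mean + z * s / math.sqrt(n))

# Interval for a single *individual* observation: built from s itself.
ci_single = (mean - z * s, mean + z * s)

print(ci_mean)
print(ci_single)
```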