The ‘correct’ way of determining uncertainty of an average value from multiple measurements

data analysis, error analysis, measurements, statistics

I am very confused about the correct way to calculate the uncertainty of the average value ($x_{avg}$) of a set of measurements $(x_1 \dots x_N)$. I have found at least four different ways of doing it around the internet, as follows:

  • Method 1: Uncertainty is the average of the absolute deviations from the mean. That is, $$\Delta x_{avg} = \frac{|x_{avg} - x_1| + \dots + |x_{avg} - x_N|}{N}$$ (as described in this YouTube video)

  • Method 2: $\Delta x_{avg} = \frac{R}{2}$, where $R$ is the range of the values (from this YouTube video)

  • Method 3: $\Delta x_{avg} = \frac{R}{2\sqrt{N}}$ from this document

  • Method 4: $\Delta x_{avg} = \frac{\sigma}{\sqrt{N}}$, $\sigma$ being the standard deviation of the data set (from here)
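To see how much the four recipes actually disagree, here is a short sketch that computes all of them on the same sample. The data values are made up purely for illustration:

```python
import statistics

# made-up sample of repeated measurements (illustrative only)
x = [9.8, 10.1, 9.9, 10.3, 9.7, 10.2]
n = len(x)
m = statistics.mean(x)                         # x_avg

# Method 1: mean of the absolute deviations from the mean
m1 = sum(abs(m - xi) for xi in x) / n

# Method 2: half the range of the values
r = max(x) - min(x)
m2 = r / 2

# Method 3: half-range divided by sqrt(N)
m3 = r / (2 * n ** 0.5)

# Method 4: standard deviation of the data set divided by sqrt(N)
s = statistics.stdev(x)                        # sample std dev (n - 1 denominator)
m4 = s / n ** 0.5

print(m, m1, m2, m3, m4)                       # → 10.0 0.2 0.3 0.122... 0.096...
```

On this sample the four estimates range from about 0.10 to 0.30, so the choice of method matters at the level of a factor of three.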

Which is the correct way?

Best Answer

You evaluate the mean and express the uncertainty in the mean as the standard deviation of the mean.

Your estimate of the mean of a sample with $n$ values is $m = \frac{1}{n} \sum_{j=1}^{n} y_{j}$, where $y_{j}$ is the $j^{th}$ value in the sample. The standard deviation of the mean is $S = \sqrt{\frac{s^2}{n}} = \frac{s}{\sqrt{n}}$, where $s = \sqrt{\frac{\sum_{j=1}^{n} (y_{j} - m)^2}{n - 1}}$ is the standard deviation of the sample. You report $m \pm S$.
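In code, the recipe above looks like the following (the measurement values are made up for illustration):

```python
import statistics

y = [4.95, 5.02, 4.98, 5.05, 5.00]   # made-up repeated measurements
n = len(y)
m = statistics.mean(y)                # sample mean
s = statistics.stdev(y)               # sample standard deviation (n - 1 denominator)
S = s / n ** 0.5                      # standard deviation of the mean
print(f"{m:.3f} ± {S:.3f}")           # → 5.000 ± 0.017
```

Note that `statistics.stdev` uses the $n - 1$ denominator from the formula above; `statistics.pstdev` (the population form, with denominator $n$) would give a slightly smaller, biased estimate for a small sample.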