Right, at the first-year undergraduate level, this is how you typically do it.
- Firstly, if you only take one measurement each of x, y, and z, use the instrument uncertainty (half its smallest scale division, or whatever is stated by the manufacturer) and propagate that error through to f. Your final answer would then look like $f \pm \delta f$,
where
$\delta f = \sqrt{\left(\frac{\partial f}{\partial x}\,\delta x\right)^2 + \left(\frac{\partial f}{\partial y}\,\delta y\right)^2 + \left(\frac{\partial f}{\partial z}\,\delta z\right)^2}$
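As a sketch of this single-measurement case in Python. The original doesn't say what f is, so f(x, y, z) = xy/z and all numbers below are made up for illustration:

```python
import math

# Hypothetical example: f(x, y, z) = x * y / z (your actual f will differ)
def f(x, y, z):
    return x * y / z

# Single measurements with instrument uncertainties
# (half the smallest scale division; values are invented)
x, y, z = 12.0, 3.5, 2.0
dx, dy, dz = 0.05, 0.05, 0.05

# Partial derivatives of this particular f, computed analytically
df_dx = y / z
df_dy = x / z
df_dz = -x * y / z**2

# Propagate in quadrature: delta(f) = sqrt(sum of (partial * uncertainty)^2)
delta_f = math.sqrt((df_dx * dx)**2 + (df_dy * dy)**2 + (df_dz * dz)**2)

print(f"f = {f(x, y, z):.3f} +/- {delta_f:.3f}")
```

For a different f, only the three partial derivatives change; the quadrature sum stays the same.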
Now, there is a better way to estimate the uncertainty of f: take multiple measurements rather than a single one. With multiple measurements, we can obtain the standard deviation and, subsequently, the standard error (a.k.a. the standard deviation of the mean).
There is a caveat: to make meaningful use of the standard error, you should take at least N >= 5 measurements.
- When multiple measurements are taken, calculate the mean of each variable along with its standard error (the standard deviation of each variable divided by sqrt(N)). Compute f from the means of the variables, and compute delta(f) by propagating the standard errors of each variable in the same manner as the equation above. This gives you the best estimate f +- delta(f) for the case of multiple readings.
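The multi-measurement recipe might be sketched like this, again with a made-up f(x, y, z) = xy/z and invented readings:

```python
import math
import statistics

# Hypothetical repeated measurements (N >= 5) of each variable
xs = [12.1, 11.9, 12.0, 12.2, 11.8]
ys = [3.5, 3.6, 3.4, 3.5, 3.5]
zs = [2.0, 2.1, 1.9, 2.0, 2.0]

def mean_and_sem(samples):
    """Mean and standard error: sample std dev divided by sqrt(N)."""
    return statistics.mean(samples), statistics.stdev(samples) / math.sqrt(len(samples))

x, sem_x = mean_and_sem(xs)
y, sem_y = mean_and_sem(ys)
z, sem_z = mean_and_sem(zs)

# Best estimate of f uses the means; delta(f) propagates the standard errors
# through the same quadrature formula (illustrative f = x * y / z)
f_best = x * y / z
delta_f = math.sqrt((y / z * sem_x)**2 + (x / z * sem_y)**2
                    + (x * y / z**2 * sem_z)**2)
print(f"f = {f_best:.3f} +/- {delta_f:.3f}")
```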
Now, there is still the uncertainty of the instrument itself (the 1 mm precision you stated). At the first-year undergraduate level, this instrument uncertainty is typically ignored when taking multiple readings. However, if there is a large discrepancy between the experimental and theoretical values, it can be taken into account by treating the instrumental uncertainty as a form of systematic error.
- To include instrumental uncertainty, let the delta(f) calculated above be delta(fRandom). To calculate delta(fSystematic), multiply the instrumental (fractional) uncertainty by the mean value of each variable to get the systematic uncertainty of each variable, then propagate those through the same error-propagation formula. The resultant/net uncertainty can then be expressed as the sum of the two terms in quadrature:
$\delta f_{net} = \sqrt{\delta f_{random}^2 + \delta f_{systematic}^2}$
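The quadrature combination of the two contributions is a one-liner; the numbers below are invented purely to show the arithmetic:

```python
import math

# Illustrative values, not from the original answer:
delta_f_random = 0.3      # from propagating the standard errors of the means
delta_f_systematic = 0.4  # from propagating the instrumental uncertainties

# Net uncertainty: the two contributions added in quadrature
delta_f_net = math.sqrt(delta_f_random**2 + delta_f_systematic**2)
print(delta_f_net)  # 0.5
```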
Therefore, the best way to report the experimental value in this case would be
$f \pm \delta f_{net}$
Hi: your question sounds like you might have a time series, and I think the answer depends critically on what 0.001 represents. If you already know, or can find out, what it is, the rest is probably not so difficult.
In general statistical modelling, there is usually an underlying population model that makes assumptions about how the measurements are generated. Once that is given, any statistic can hopefully be constructed from first principles (but maybe not; see below for more on that).
For example, suppose the underlying model was say
$y_{it} = \mu_i + \epsilon_{it} ~~~~~\forall i = 1,2,3$
so the population is actually 3 different populations, each with its own mean and standard deviation, where $\epsilon_{it}$ has some distribution, say $N(0, \sigma^2_{\epsilon_i})$, and the $\mu_{i}$ are the numbers you stated,
namely (1.232, 1.197, 1.292). Then, the statistics are pretty straightforward depending on whether you take an average or a sum or whatever.
For example, in the case above, if you wanted to know the standard deviation of a new value generated from group 1, with known mean, say 1.232, then it's $\sqrt{0.001}$, because that's the standard deviation of $\epsilon_{1}$.
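As a quick sanity check, that model can be simulated. This sketch assumes (it is not stated in the original) that 0.001 is the noise variance shared by all three groups, and the sample size of 1000 per group is arbitrary:

```python
import random
import statistics

# Simulate y_it = mu_i + eps_it for the three groups.
# Assumption: 0.001 is the common noise VARIANCE, so sigma = sqrt(0.001).
random.seed(0)
mus = [1.232, 1.197, 1.292]
sigma = 0.001 ** 0.5  # ~= 0.0316

samples = {i: [mu + random.gauss(0.0, sigma) for _ in range(1000)]
           for i, mu in enumerate(mus, start=1)}

for i, ys in samples.items():
    # the sample mean recovers mu_i; the sample std dev recovers sqrt(0.001)
    print(i, round(statistics.mean(ys), 3), round(statistics.stdev(ys), 3))
```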
On the other hand, maybe the true known $\mu_{1}$ is really 1.21 and you observed a random observation represented by the observed value 1.232 and suppose you don't know the $\sigma_{1}$. Then in that case, you can estimate the standard deviation of a new observation from group 1 using the sample of measurements that came from group 1.
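When $\sigma_{1}$ is unknown, it can be estimated from the group's own sample, e.g. (the measurements below are made up for illustration, with 1.232 as one observed draw among them):

```python
import statistics

# Hypothetical repeated measurements from group 1 (invented values)
group1 = [1.232, 1.198, 1.265, 1.221, 1.249, 1.210]

mu_hat = statistics.mean(group1)      # estimate of mu_1
sigma_hat = statistics.stdev(group1)  # estimates the std dev of a new observation

print(round(mu_hat, 4), round(sigma_hat, 4))
```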
Or maybe the measurements come from one population with an overall known $\mu$ say equal to 1.2 rather than a $\mu_{i}$. So, there are a lot of possibilities and maybe one of them applies to your problem. I hope that helps and I can try to give specifics if such a description is possible.
P.S.: Note that if you take the $t$ subscripting out of the model description above, there's a whole non-time-series area called analysis of variance that delves into this in much, much deeper detail and calculates more complex statistics. Based on your question, I assumed that wasn't what you wanted; if it is, then I'm not the person to explain it.
Best Answer
It is true that the median is more robust (less sensitive to outliers) than the mean.
My understanding is that the reason statistics tends to use the mean (and squared errors, for that matter) is that in the long run, on average, and assuming symmetrical distributions, they get closer to the true answer than medians and absolute deviations do.
However, if you are not interested in being correct on average over the long run and are more interested in being close to right on any given point estimate, the median is probably a reasonable statistic for you to select.
It is unclear from your question what exactly you want to do with your "error estimate". Do you want to use it to do a statistical test? If all you need is another summary statistic to describe the dispersion around your observed measure of central tendency AND you want to continue to leverage whatever 'advantage' the median is giving you, then I would recommend calculating the median absolute deviation. That is, the median of |X - Mdn|.
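A minimal sketch of the median absolute deviation, using made-up data with one outlier to show its robustness:

```python
import statistics

def median_abs_deviation(xs):
    """Median absolute deviation: the median of |X - Mdn|."""
    mdn = statistics.median(xs)
    return statistics.median(abs(x - mdn) for x in xs)

data = [1.2, 1.3, 1.1, 1.25, 9.0]   # one gross outlier
print(median_abs_deviation(data))   # the outlier barely affects the result
```

Compare this with the standard deviation of the same data, which the single outlier inflates dramatically.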