[Physics] the difference between Uncertainty and Standard Deviation

definition, error-analysis, experimental-physics, statistics, terminology

In physics lab class we are learning about uncertainty and propagation of error. Last week we learned how to find the uncertainty of a calculated value using the equation $$\delta_f = \left(\frac{\partial f}{\partial x}\right)\delta_x + \left(\frac{\partial f}{\partial y}\right)\delta_y$$ if $f$ is a function of $x$ and $y$. My teacher showed us how this equation comes from the Taylor series.
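
If I write out the first-order Taylor expansion myself (keeping only the linear terms, which I assume is the step my teacher meant), I get

$$f(x+\delta_x,\, y+\delta_y) \approx f(x,y) + \frac{\partial f}{\partial x}\,\delta_x + \frac{\partial f}{\partial y}\,\delta_y,$$

so the shift in $f$ is approximately $\left(\frac{\partial f}{\partial x}\right)\delta_x + \left(\frac{\partial f}{\partial y}\right)\delta_y$, which matches the equation above.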

This week we learned about how to find the statistical version of uncertainty by using the equation $$\sigma_f = \sqrt{\left(\frac{\partial f}{\partial x}\sigma_x\right)^2 + \left(\frac{\partial f}{\partial y}\sigma_y \right)^2}$$
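
To compare the two formulas concretely, here is a short Python sketch I put together; the function $f(x,y) = xy$ and all of the numbers are made up purely for illustration:

```python
import math

# Made-up example: f(x, y) = x * y with invented values and uncertainties
x, y = 2.0, 3.0
dx, dy = 0.1, 0.2               # uncertainties delta_x and delta_y

dfdx = y                        # partial derivative of f with respect to x
dfdy = x                        # partial derivative of f with respect to y

# First formula: linear propagation
delta_f = dfdx * dx + dfdy * dy

# Second formula: propagation in quadrature
sigma_f = math.sqrt((dfdx * dx) ** 2 + (dfdy * dy) ** 2)

print(delta_f)   # about 0.7
print(sigma_f)   # about 0.5
```

For these made-up numbers the quadrature result is smaller than the linear one, which is always the case since $\sqrt{a^2+b^2} \le |a| + |b|$.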

My teacher tells us that this is the statistical version of uncertainty that gives us 68 percent of the total uncertainty. I am having a hard time with this definition. It seems that if this were true, we could just multiply the equation given earlier by 0.68.
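
To test my reading of the 68 percent statement, I tried a small Monte Carlo check (assuming $x$ and $y$ are normally distributed; the function and all the numbers are again made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up central values and standard deviations for x and y
x0, y0 = 2.0, 3.0
sx, sy = 0.1, 0.2

# For f(x, y) = x * y the partial derivatives at the centre are y0 and x0
sigma_f = np.sqrt((y0 * sx) ** 2 + (x0 * sy) ** 2)

# Simulate many "measurements" of x and y and compute f each time
x = rng.normal(x0, sx, 100_000)
y = rng.normal(y0, sy, 100_000)
f = x * y

# Fraction of simulated f values within one sigma_f of the central value
coverage = np.mean(np.abs(f - x0 * y0) < sigma_f)
print(coverage)   # close to 0.68 for this nearly linear case
```

If I am reading it correctly, roughly 68 percent of the simulated values of $f$ land within one $\sigma_f$ of the central value, which seems different from multiplying $\delta_f$ by 0.68.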

From what I have learned in my statistics class, when you add standard deviations you have to add their squares (variances). I can see how this equation would make sense if we were trying to find the standard deviation of a calculated value, but my teacher tells us to plug in the uncertainty of $x$ for $\sigma_x$ and the uncertainty of $y$ for $\sigma_y$.
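
For example, for a simple sum $f = x + y$ of independent quantities, the rule from my statistics class would give

$$\sigma_f = \sqrt{\sigma_x^2 + \sigma_y^2},$$

so with, say, $\sigma_x = 3$ and $\sigma_y = 4$ I would get $\sigma_f = 5$ rather than $7$.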

Are the two symbols $\delta_x$ and $\sigma_x$ representing the same thing? I am confused about how the second equation is valid. Is the second equation used to find the standard deviation or the uncertainty? Do physicists just use the word standard deviation to refer to uncertainty? Why don't we plug in the standard deviations of the distributions of $x$ and $y$ for $\sigma_x$ and $\sigma_y$, which can be found using $\sqrt{\frac{1}{n-1}\sum_i (x_i - \bar{x})^2}$? If $\sigma_f$ truly is the standard deviation of the distribution of calculated $f$, then plugging in the uncertainties for $\sigma_x$ and $\sigma_y$ doesn't make sense. Wouldn't this mean that you could manipulate the standard deviation $\sigma_f$ just by what values you choose for your uncertainties?
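
For reference, this is how I would compute that estimator from repeated measurements (the numbers are invented):

```python
import numpy as np

# Invented repeated measurements of x
x = np.array([10.1, 10.3, 9.8, 10.0, 10.2])

# Explicit form of the estimator with the n - 1 denominator
sigma_explicit = np.sqrt(np.sum((x - x.mean()) ** 2) / (len(x) - 1))

# NumPy's built-in version; ddof=1 gives the same n - 1 denominator
sigma_numpy = np.std(x, ddof=1)

print(sigma_explicit, sigma_numpy)   # the two agree
```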

Also, in my lab class we are taught to choose our uncertainties based on what we think the limitations of our instruments are. However, I have seen a few other people use the standard deviation of their measurements and call this the uncertainty. Is this the more common method? I think this would clear up some of the problems I am having.
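
To make the two conventions concrete, this is how I picture them side by side (all numbers are invented, and the 0.005 cm instrument limit is just an assumption for the example):

```python
import numpy as np

# Invented repeated readings from a ruler with 0.01 cm resolution
readings = np.array([2.45, 2.47, 2.44, 2.46, 2.48])

# Convention 1: uncertainty from the instrument limit,
# e.g. half of the smallest division (an assumption for this example)
delta_instrument = 0.005

# Convention 2: uncertainty from the scatter of repeated readings
delta_statistical = np.std(readings, ddof=1)

print(delta_instrument, delta_statistical)
```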

Best Answer

"Do physicists just use the word standard deviation to refer to uncertainty?" Often we assume that results of our measurements are normal distributed (we can argue that, if we don't know the reason for the deviation from the "real" value, then it is most likely due to many factors and if you have many arbitrarily distributed factors influencing a variable, then that variable follows the normal distribution - central limit theorem). Then we can use some measure of the width of the normal distribution as our uncertainty, e.g. the std-deviation. But of course you are basically free in choosing what you use, one sigma might be ok now, but often multiples of sigma are used. You might also know that whatever you are measuring is in fact not normal distributed, then you would have to choose some other measure of uncertainty. So when it comes to uncertainties there is no one-size-fits-all solution. However, Gaussian error propagation based on standard deviations is the go-to if there are no reasons against it and in that case uncertainty and some multiple of sigma would be the same thing.

Now to the question of what values to put in for the sigmas. Let me mention that $\sqrt{\frac{1}{n-1}\sum_i\left(x_i - \bar{x}\right)^2}$ is not the standard deviation but an estimator of the "real" standard deviation of the distribution, and that estimator itself has an uncertainty (if it were the real value of the standard deviation, the formula would give the same result for every sample). So why don't we plug in the standard deviations of the distributions? Because you might have a better guess for the standard deviation than the estimator above.
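
A quick way to see that this estimator fluctuates from sample to sample (a toy example with a known true standard deviation of 1):

```python
import numpy as np

rng = np.random.default_rng(2)

# Draw several small samples from a distribution whose true
# standard deviation is exactly 1.0 and estimate sigma each time
true_sigma = 1.0
for _ in range(5):
    sample = rng.normal(0.0, true_sigma, size=10)
    print(np.std(sample, ddof=1))   # scatters noticeably around 1.0
```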

"Wouldn't this mean that you could manipulated the standard deviation σ just by what values you choose for your uncertainties." Yes, you can. Usually you would have to describe in detail why you chose some measure of uncertainty and others might be critical of your choice and contest your results because of that.
