Solved – What determines the precision of uncertainties

Tags: measurement-error, precision, uncertainty

What limits the precision with which you can describe the uncertainty of a measurement?

I will describe two examples that feel qualitatively different, but I am not sure if they are quantitatively different in how you would deal with the uncertainty.

  1. Measuring the length of a pencil with a ruler that is graduated in units of cm. With such a measuring device you can accurately determine the length of the pencil down to the whole cm, and then estimate one additional decimal place (mm). For example, one might determine that the pencil is 7.3 cm long. The uncertainty of the measurement lies in the tenths place of the value. One estimate of the uncertainty might be 0.5 mm, since this is essentially the largest reasonable error one could make without the value 'bleeding over' into the next mm unit. So one would report the pencil's length as: 7.3 ± 0.05 cm. Here, only one measurement is being performed on a single object. The uncertainty reported is the uncertainty in the last decimal place, and as such has only a single significant figure. It would not be reasonable to report values such as 7.30 ± 0.05 cm or 7.3 ± 0.050 cm, since these imply more precision than this device can deliver.

    Incidentally, in this example, what other ways might there be to estimate the uncertainty in the measurement besides the 'reasonable overestimate' of 0.5 mm that I described? Also, how does the uncertainty in the accuracy of the ruler come into play? In my example, I assumed that the ruler essentially gave us our definition of what a cm is, but in actuality the graduation markings on the ruler could be drawn with some (albeit small) error.

  2. Measuring the lengths of a set of nearly identical pencils. Through repeated measurements of these objects (one measurement per pencil, repeated for many pencils) one can collect a set of measurements of the length of this type of pencil. From this, one can determine the average length of this type of pencil (L). An estimate of the uncertainty in the length could be taken as the standard deviation of the measured values (σ). At this point, one can report the pencils' lengths as: L ± σ. We will say that these measurements were performed with the same ruler as in part 1, namely one graduated in units of cm, with individual measurements recorded with estimates down to mm precision. As such, L should have mm precision at most (let's say 7.3 cm again). What about σ?

    I see three possible scenarios:

    A) The distribution of measurements is extremely tight, for example σ = 0.0000034873 cm (intentionally displaying something that mimics a calculator output). Here, the first significant digit of the uncertainty lies far below the minimum precision of the measuring device. Would one report an uncertainty of 0 mm, or 3E-6 cm? Neither of these feels right. How does the uncertainty from (1) come into play?

    B) The distribution of measurements is extremely broad, for example σ = 4.3289483 cm (obviously these are hardly nearly identical pencils anymore, as was previously assumed). Now the most significant digit of the uncertainty is of the same order of magnitude as the average. Would one then neglect the mm precision of the measurement and report the length to be 7 ± 4 cm? Here I round the average value to the same most significant decimal place (whole cm, in this case).

    C) The distribution is in between the two extreme cases mentioned above, for example σ = 0.295401 cm. Now it seems reasonable to simply round everything to the nearest mm and report the length as 7.3 ± 0.3 cm.

    Would there ever be a case where one is justified in reporting an uncertainty with more than one significant figure? For example, 7.3 ± 1.3 cm?
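The rounding convention applied in cases B and C above (round σ to one significant figure, then round the mean to the same decimal place) can be sketched mechanically. This is a minimal sketch of one common convention, not a universal rule, and `report` is a hypothetical helper, not part of any library:

```python
import math

def report(mean_cm, sigma_cm):
    """Round sigma to one significant figure, then round the mean to
    the same decimal place (a common convention, not a universal rule)."""
    exp = math.floor(math.log10(abs(sigma_cm)))  # exponent of sigma's leading digit
    decimals = -exp  # number of decimal places that keeps one significant figure
    if decimals <= 0:
        return f"{int(round(mean_cm, decimals))} ± {int(round(sigma_cm, decimals))} cm"
    return f"{round(mean_cm, decimals)} ± {round(sigma_cm, decimals)} cm"

print(report(7.3, 0.295401))   # case C -> 7.3 ± 0.3 cm
print(report(7.3, 4.3289483))  # case B -> 7 ± 4 cm
```

Note that applying this rule blindly to case A reproduces exactly the unsatisfying tiny uncertainty the question describes, which is why the instrument's own precision has to enter the picture there.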

Best Answer

Your pencil example is peculiar. You'll see why when I describe how this precision thing works in a typical case.

Say you're measuring a room with a measuring tape that has 1 mm ticks, and you get a reading of 10033 mm. The way to report this is $10033 \pm 0.5$ mm: you usually take half a tick as the a priori uncertainty, $\sigma = 0.5$ mm.

To increase precision you measure the room several times: 10033, 10041, 10031 mm. Now you can calculate the standard deviation $\sigma_3 \approx 5.3$ mm, so you can throw out the a priori uncertainty and report $10035 \pm 5$ mm. Note that $\sigma_3 > \sigma$.
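The numbers above can be checked directly; this sketch uses the sample standard deviation (dividing by $n-1$), which is what the quoted $\sigma_3 \approx 5.3$ mm corresponds to:

```python
import statistics

measurements_mm = [10033, 10041, 10031]

mean = statistics.mean(measurements_mm)    # arithmetic mean of the readings
sigma = statistics.stdev(measurements_mm)  # sample standard deviation (n - 1)

print(f"{mean:.0f} ± {sigma:.1f} mm")  # -> 10035 ± 5.3 mm
```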

If your ruler is graduated with 1 cm ticks, then the a priori precision is usually reported as 0.5 cm, but you can certainly eyeball the reading to close to 1 mm precision, so maybe 0.5 mm is an appropriate precision to report. Also, who would measure a pencil with a ruler graduated in whole cm? I would say it's not an appropriate instrument for the task. In fact, I have never seen a ruler with 1 cm ticks; even the measuring tapes used in construction have 1 mm marks.

Get a ruler with 1 mm gradations or a standard caliper, where the a priori uncertainty is going to be much smaller than the uncertainty calculated from a few repeated measurements. That way you don't need to deal with the interaction between your instrument's precision and the spread in the objects you are measuring.
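The advice above is to sidestep that interaction, but if one does need to fold the instrument's a priori uncertainty into the statistical one (case A in the question), a common convention, not stated in this answer, is to add the two independent contributions in quadrature. A minimal sketch, with hypothetical names:

```python
import math

def combined_sigma(sigma_stat, sigma_instr):
    """Combine statistical and instrument uncertainties in quadrature,
    treating the two contributions as independent (a common convention)."""
    return math.sqrt(sigma_stat**2 + sigma_instr**2)

# Case A from the question: a tiny statistical spread against a
# 0.05 cm instrument floor -- the instrument term dominates.
print(combined_sigma(0.0000034873, 0.05))  # -> essentially 0.05 cm
```

This makes explicit why the answer to case A is neither "0 mm" nor "3E-6 cm": the reported uncertainty can never drop below the instrument floor.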
