[Physics] What limitations are there in measuring physical properties accurately

computational-physics, error-analysis, experimental-physics, gravity, weight

In a StackOverflow answer, I attempted to explain why a 32-bit float was perfectly adequate for representing the questioner's weight measurement:

Physical properties are inaccurately measured anyway:

  • no measuring instrument is perfectly calibrated or completely reliable
    (e.g. weighing scales cannot adjust for the exact gravitational field
    at the precise time and place in which they are used, or may have
    undetected mechanical/electrical faults); and

  • no measuring instrument has infinite precision—results are actually
    given across some interval, but for convenience we often adopt a
    "shorthand" representation in which that information is dropped in
    favour of a single number.

Consequently, all that anyone can ever say about a physical property
is that we have a certain degree of confidence that its true value
lies within a certain interval: so, whereas your question gives an
example weight of "5 lbs 6.2 oz", what you will actually have is
something about which you're, say, 99.9% confident that its weight
lies between 5 lbs 6.15 oz and 5 lbs 6.25 oz.

Seen in this context, the approximations of a 32-bit float don't
become even slightly significant until one requires extraordinarily
high accuracy
(relative to the scale of one's values). We're talking
the sort of accuracy demanded by astronomers and nuclear physicists.

But something about this has been bugging me and I can't quite put my finger on what it is. I know that it's completely unimportant for the purposes of that StackOverflow answer, but I am curious:

  1. Is what I have said (about errors and uncertainty in measuring physical properties) completely, pedantically correct?

    I acknowledge that knowing the gravitational field is only relevant if one wishes to ascertain a body's mass; however, at the time it struck me as a good illustration of experimental errors: systematic error from "imperfect calibration" (i.e. to the gravitational field at the scales' location of use) and random error from the instrument's "unreliability" (i.e. fluctuations in the field over time).

    Is there a similarly simple and accessible illustration of error that is more relevant to weight? Perhaps the inability to perfectly calibrate springs, together with the randomness of their precise behaviour due to quantum effects? (If that's not complete and utter nonsense, then I'm truly amazed!)

  2. Have I omitted any further points that would help to justify my conclusion (that a 32-bit float is adequate for the OP's needs)?

    Perhaps I have not fully explained the types or risks of experimental error? Perhaps I have not fully explained the limitations of physical measurements?

  3. The final sentence quoted above (re astronomers and nuclear physicists) is, of course, an exaggeration: is there a better analogy?


UPDATE 

I decided to remove this rant about physical measurement from my original answer, since it was pretty tangential to the purpose of that question. Nevertheless, I am curious to find a good answer to this question.

Best Answer

For point 1) you are correct: no instrument can be precisely calibrated or has infinite precision. This is in part limited by how well the corresponding SI units are known; NPL has a nice little FAQ on this. Similarly, all measurements will have some noise (possibly very small), which limits precision.
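To make that last point concrete, here is a minimal sketch with entirely made-up numbers for the "true" weight and the noise level: repeated readings from a noisy instrument scatter, so the best one can report is a mean together with an uncertainty, never an exact value.

```python
import random
import statistics

# Hypothetical numbers: a "true" weight of 86.2 oz (5 lb 6.2 oz) read by a
# scale whose noise is Gaussian with a standard deviation of 0.05 oz.
true_weight_oz = 86.2
noise_sd_oz = 0.05

# Twenty repeated readings, each perturbed by instrument noise.
readings = [random.gauss(true_weight_oz, noise_sd_oz) for _ in range(20)]

mean = statistics.mean(readings)
sem = statistics.stdev(readings) / len(readings) ** 0.5  # standard error of the mean

print(f"best estimate: {mean:.3f} +/- {sem:.3f} oz")
```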

Personally I wouldn't use weight as an example, for several reasons. Firstly, as you point out, it is easy to confuse the ideas of mass and weight; if you want to be completely correct, this is a confusion you don't need. Another concern is that mass is currently the only base quantity whose unit is still defined by a physical artifact (the standard kilogram) rather than by physical constants, so the definition of a kilogram is pretty uncertain.

In my opinion a better example would be measuring a metre. A metre is defined as "the length of the path travelled by light in vacuum during a time interval of 1/299,792,458 of a second". The question then becomes: how good is your clock? Suppose you use an atomic clock accurate to $1$ part in $10^{14}$; even this clearly has some small uncertainty. In reality you probably wouldn't use something that accurate, but rather something that has been calibrated against an instrument which was itself calibrated like this, and which will therefore be much less accurate.

As a side note, this is not how a metre is actually calibrated; in practice people use things like a frequency-stabilised laser, which has a very well known wavelength, together with an interferometer to count the fringes seen over a distance.

For 2) I don't think you need to say any more. There are lots of things you could say, but you are trying to answer a specific question, not write a book. The NPL beginner's guide to uncertainty of measurement provides a good introduction to some of the topics, but is by no means comprehensive.

For 3) I would say your analogy isn't far wrong: it's only really scientists that care about this sort of accuracy, and possibly anyone involved in micro-manufacture (think Intel). Even most engineers don't care (they tend to double stuff just to be certain ;) ). I think the best way to show it is to do what you did in your actual answer and give this as a percentage error, to show how small it actually is.
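For instance, a quick sketch of that percentage comparison, using the example weight from the question and the hypothetical ±0.05 oz confidence interval:

```python
import struct

def to_float32(x: float) -> float:
    """Round a Python float (64-bit) to the nearest 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

weight_oz = 5 * 16 + 6.2          # 86.2 oz, the example weight
measurement_half_width_oz = 0.05  # hypothetical +/- 0.05 oz interval

float32_rounding_error = abs(to_float32(weight_oz) - weight_oz)

print(f"measurement error:      {100 * measurement_half_width_oz / weight_oz:.4f} %")
print(f"float32 rounding error: {100 * float32_rounding_error / weight_oz:.7f} %")
# Roughly 0.06 % versus a few millionths of a percent.
```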
