[Math] the meaning of the range and the precision

computer science, floating point, terminology

Using scientific notation:

$$3.14 = 0.314 \times 10^1$$

From Tanenbaum's Structured Computer Organization, section B.1:

The range is effectively determined by the number of digits in the exponent and the precision is determined by the number of digits in the fraction.
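Mapping the quote onto the example above (the labels here are mine, not the book's), the fraction and the exponent play these two roles:

$$3.14 = \underbrace{0.314}_{\text{fraction} \;\Rightarrow\; \text{precision}} \times \underbrace{10^{1}}_{\text{exponent} \;\Rightarrow\; \text{range}}$$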

I know how this notation works but I am asking about the meaning of the two words.

Why does the book call them the range and the precision? What exactly do they mean?

Best Answer

The range is determined by the biggest and smallest (positive) numbers you can represent. Clearly, with two digits in the exponent you can write numbers from approximately $10^0$ (or $10^{-99}$ if you allow a signed exponent) to $10^{99}$ (even a bit more by clever choice of mantissa). With one-digit exponents the range is much smaller: from $10^{-9}$ to $10^{9}$.
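As a concrete (binary rather than decimal) illustration, here is a minimal Python sketch probing the range of IEEE 754 double precision, whose 11 exponent bits play the same role as the decimal exponent digits above; the constants come from the standard `sys.float_info`.

```python
import math
import sys

# IEEE 754 doubles have 11 exponent bits, so normalized values span
# roughly 2.2e-308 to 1.8e+308 -- the binary analogue of a two-digit
# signed decimal exponent giving a range of 1e-99 to 1e+99.
print(sys.float_info.max)   # largest finite double, ~1.7976931348623157e+308
print(sys.float_info.min)   # smallest positive normalized double, ~2.2250738585072014e-308

# Exceeding the range overflows to infinity: the exponent field simply
# cannot express a larger magnitude.
too_big = sys.float_info.max * 10
print(too_big, math.isinf(too_big))   # inf True
```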

The precision is determined by the smallest (relative) difference between two representable numbers. The difference between $3.14$ and $3.15$ is about $0.3\%$ of the values (and the difference between $6.02\cdot 10^{23}$ and $6.03\cdot 10^{23}$, or between $1.60\cdot 10^{-19}$ and $1.61\cdot 10^{-19}$, is of the same relative magnitude).

On the other hand, $3.1415926535897932384626433$, $6.02214129\cdot10^{23}$ and $1.602176565\cdot10^{-19}$ carry a lot more precision. The latter two are numerical values for the Avogadro constant and the elementary charge; obtaining these values required very precise measurements. The precision does not depend on the exponent. That is, if the value of $e$ in another unit system is $1.602176565\cdot10^{6}$, the same careful precision in the measurements is required to obtain that many digits.
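To see that the relative precision is (roughly) independent of the exponent, the sketch below uses `math.ulp` (Python 3.9+) to measure the gap between each value and the next representable double; the sample values echo the ones used above.

```python
import math

# For doubles, the gap to the next representable number (one "unit in
# the last place") grows with the magnitude of the value, so the
# *relative* gap stays near 2**-52 (~2.2e-16) whatever the exponent is.
for x in [3.14, 6.02214129e23, 1.602176565e-19, 1.602176565e6]:
    ulp = math.ulp(x)                      # absolute spacing at x
    print(f"x = {x:.6e}   ulp = {ulp:.3e}   relative = {ulp / x:.3e}")
```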
