[Math] What is the second smallest positive single-precision floating-point number in C? (IEEE 754)

computer science, floating point

Documentation on how single-precision floating-point numbers work in C can be found in various good places, such as:

  1. IEEE754 32-bit single precision format
  2. https://en.wikipedia.org/wiki/Single-precision_floating-point_format
  3. https://stackoverflow.com/questions/7644699/how-are-floating-point-numbers-are-stored-in-memory
  4. https://www.cprogramming.com/tutorial/floating_point/understanding_floating_point_representation.html

To summarize, the format consists of 32 bits:

  • 1 bit for the sign (bit 31)
  • 8 bits for the exponent (bits 30 to 23), with 127 as the bias (the zero "offset")
  • 23 bits for the mantissa (bits 22 to 0), with the leading 1 left implicit

From Wikipedia you can see that:

7f7f ffff = 0 11111110 11111111111111111111111 = $(1 − 2^{−24}) × 2^{128} ≈ 3.402823466 × 10^{38}$ (max finite positive value in single precision)

0080 0000 = 0 00000001 00000000000000000000000 = $2^{−126} ≈ 1.175494351 × 10^{−38}$ (min normalized positive value in single precision)

That is, the minimum and maximum represented in binary and a decimal approximation.

I especially like reference link number 4 above, where it states that there is an epsilon.

How does that "step" go in binary, and what decimal values does it pass through, starting from zero and moving up (positive) in the smallest steps possible?

What about the statement on the size of the steps in the Wikipedia link above (number 2), in its section on precision limitations?

The steps described there are for integers. Does the same apply to discrete jumps between non-integer values? In other words, depending on the size of the number $n_{1}$, will the immediately "next" number $n_{2}>n_{1}$ be farther from or closer to it? (Will the gap $|n_{1}-n_{2}|$ vary?)

Best Answer

With the machine epsilon for IEEE single precision, $\epsilon_M = 2^{1-24} = 1.1920928955078125\times 10^{-7}$, N. J. Higham (Accuracy and Stability of Numerical Algorithms, 2nd ed., see [1]) gives the following:

Lemma 2.1. The spacing between a normalized floating point number $x$ and an adjacent normalized floating point number is at least $0.5\epsilon_M |x|$ and at most $\epsilon_M |x|$.

In Fig. 2.1 of that book you can see a graph of the relative spacing for single precision, showing a wobbling sawtooth curve.

Here are some actual values for the spacing at certain single-precision numbers (hex bit patterns instead of binary; columns: the number, the next number, and their difference):

Spacing at min. normalized single = 1.175494351E-0038
00800000   00800001 1.40129846432482E-0045
00800001   00800002 1.40129846432482E-0045

Spacing at 1
3F7FFFFF   3F800000 5.96046447753906E-0008
3F800000   3F800001 1.19209289550781E-0007
3F800001   3F800002 1.19209289550781E-0007

Spacing at 64
427FFFFF   42800000 3.81469726562500E-0006
42800000   42800001 7.62939453125000E-0006
42800001   42800002 7.62939453125000E-0006

You can see the jump in the spacing when crossing a power of 2: the gap from a power of 2 down to its predecessor is half the gap going up from it.