Documentation on how single-precision floating point numbers work in C can be found in several good places, such as:
1. IEEE 754 32-bit single-precision format
2. https://en.wikipedia.org/wiki/Single-precision_floating-point_format
3. https://stackoverflow.com/questions/7644699/how-are-floating-point-numbers-are-stored-in-memory
4. https://www.cprogramming.com/tutorial/floating_point/understanding_floating_point_representation.html
To simplify, a value is formed from 32 bits, where:
- 1 bit is for the sign (bit 31)
- 8 bits are for the exponent (bits 30 to 23), with a bias ("offset") of 127
- 23 bits are for the mantissa (bits 22 to 0), with the implicit leading 1 left out
From Wikipedia you can see that:
7f7f ffff = 0 11111110 11111111111111111111111 = $(1 − 2^{−24}) × 2^{128} ≈ 3.402823466 × 10^{38}$ (max finite positive value in single precision)
0080 0000 = 0 00000001 00000000000000000000000 = $2^{−126} ≈ 1.175494351 × 10^{−38}$ (min normalized positive value in single precision)
That is, the maximum and minimum normalized positive values, shown in binary and with a decimal approximation.
I especially like reference link number 4 above, where it states there is an epsilon.
How would that "step" go in binary, and what decimal values would it produce, starting from zero and moving up (positive) in the smallest steps possible?
What about the statement on the size of the steps, in the section of the Wikipedia link above (number 2)?
The steps described there are for integers. Does it also apply to discrete non-integer jumps? In other words, depending on the size of the number $n_{1}$, will the immediately "next" number $n_{2}>n_{1}$ move further from or closer to it? (Will the error $|n_{1}-n_{2}|$ vary?)
Best Answer
With the machine epsilon for IEEE single precision, $\epsilon_M = 2^{1-24} = 1.1920928955078125\times 10^{-7}$, N.J. Higham (Accuracy and Stability of Numerical Algorithms, 2nd ed., see [1]) gives the following:
Lemma 2.1. The spacing between a normalized floating point number $x$ and an adjacent normalized floating point number is at least $0.5\epsilon_M |x|$ and at most $\epsilon_M |x|$.
In Fig. 2.1 there you can see the relative-distance graph for single precision, showing a wobbling saw-tooth curve.
Here are some actual values for the spacing at certain single-precision numbers (shown in hex rather than binary; columns: the number, the next number, and their difference):
You can see the jump in spacing between each power of 2 and its predecessor.