I found that if a floating-point value is assigned to a variable, sometimes the value experiences a small change, as seen below. Why is that?
MATLAB: Small change in value in a floating-point assignment
Related Solutions
MATLAB uses IEEE 754 Binary Double Precision to represent floating-point numbers. No floating-point scheme that uses a binary mantissa can exactly represent 1/10, just as finite decimal representation schemes cannot exactly represent 1/3 or 1/7.
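You can see this directly in MATLAB by printing more digits than the default display shows. A minimal sketch (the exact trailing digits depend on the binary64 rounding of 0.1):

```matlab
% 1/10 has no finite binary representation, so the variable holds
% only the nearest representable double, not 0.1 itself.
x = 0.1;
fprintf('%.20f\n', x)   % trailing digits expose the rounding error

% Each of 0.1, 0.2 and 0.3 is rounded independently, so the
% comparison below is false even though it looks like it should hold.
0.1 + 0.2 == 0.3
```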
IEEE 754 also defined a Decimal Double Precision representation scheme, which can represent 2.123 exactly. However, computing those values in software is much slower. The only systems I know of that implement IEEE 754 Decimal Double Precision in hardware are the IBM z90 series.
If you need a certain specific number of decimal places to be stored, then use rationals with a power-of-10 denominator.
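One way to do that in MATLAB is to keep a scaled integer plus a fixed power-of-10 denominator, converting to double only for display. This is a hypothetical sketch, not a library API; the variable names are illustrative:

```matlab
% Store 2.123 exactly as 2123 thousandths: the integer numerator is
% exact, and the denominator is a fixed power of 10.
num = int64(2123);           % numerator, stored exactly
den = 1000;                  % power-of-10 denominator

% Arithmetic stays exact as long as it is done on the integers.
total = num + int64(4877);   % 2.123 + 4.877, still exact

% Convert only at the edge, e.g. for display.
fprintf('%.3f\n', double(total) / den)
```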
"MATLAB calculated the wrong number!"
Nope, MATLAB calculated the right number.
The reason is simply that the value -2.9982 cannot be represented exactly using binary floating point numbers, so any calculation involving that number will inevitably accumulate some floating point error.
In exactly the same way that you cannot write 1/3 exactly as a finite decimal fraction, it is impossible to store -2.9982 exactly as a finite binary floating-point number. So although you might think that you have -2.9982, the real value stored in computer memory is slightly different. When you perform some calculation on it, the calculation accumulates this floating-point error, so the final total is not going to be exactly equal to -14.991 (unless your calculation involves only powers of two).
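A short sketch of that accumulation (the size of the residual depends on how the individual roundings happen to combine):

```matlab
% The double nearest to -2.9982 carries a tiny representation error;
% adding it five times lets that error accumulate.
x = -2.9982;
s = x + x + x + x + x;

% The difference from the "expected" answer is tiny but generally
% nonzero, on the order of a few eps(14.991).
s - (-14.991)
```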
What you see printed in the command window is the closest representation to 5 or 15 significant digits, depending on your current format setting. To see the "real" stored value, download James Tursa's FEX submission num2strexact.
Use num2strexact and you will see that the number does not really have the exact value -2.9982. All you are looking at in the command window is a representation of the floating-point number, displayed to the precision defined by your format setting. The fact that -2.9982 is displayed tells you nothing about the real floating-point value.
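Even without the FEX tool you can look past the default display using the built-in format setting and formatted printing:

```matlab
% Default 'format short' hides the error behind ~5 significant digits.
x = -2.9982;

format long          % show ~15 significant digits for doubles
x

% Printing with an explicit precision goes further still and exposes
% the rounding in the trailing digits.
sprintf('%.20f', x)
```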
You need to learn about the limits of floating point numbers. Start by reading these:
This is worth reading as well: