Hi,
I have a float called currentDbLevel that varies between -100 and 0 and is incremented based on input from an encoder. There are some conditionals that do things based on this float's value, but one of them doesn't seem to do what I want and I'm not sure why. I've simplified some of the code to show here.
-0.0 is special in IEEE 754 arithmetic. The software (or hardware on bigger machines) treats -0.0 as equal to +0.0 in comparisons, and arithmetic usually erases the sign: under the default rounding mode, (-0.0) + (+0.0) gives +0.0. Under IEEE rules, the only portable ways to test whether something is -0.0 are the signbit() or copysign()/copysignf() functions.
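Here's a minimal Arduino-style sketch (my own illustration, not code from the thread) showing both points; signbit() comes from math.h:

#include <math.h>

void setup() {
  Serial.begin(9600);
  float negZero = -0.0f;
  float posZero = 0.0f;
  // The comparison operators treat -0.0 and +0.0 as equal:
  Serial.println(negZero == posZero ? "equal" : "not equal");   // prints "equal"
  // signbit() is the reliable way to tell them apart:
  Serial.println(signbit(negZero) ? "sign bit set (-0.0)" : "sign bit clear (+0.0)");
}

void loop() {}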
Since currentDbLevel is a local variable, it is NOT initialized. You are wasting your time trying to decipher any meaning from this code, since you are comparing an uninitialized variable to an initialized one.
maxw:
Thanks for the replies and sorry for the late response!
All my variables are global and declared before "void setup()".
I worked out the problem: it was rounding. I had to go to 7 decimal places to see the issue:
YES -0.0001001 < -0.0001000
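For anyone else who hits this: Serial.print(value) rounds a float to 2 decimal places by default, and Serial.print(value, digits) asks for more. A minimal sketch using the two values from the output above:

void setup() {
  Serial.begin(9600);
  float a = -0.0001001f;
  float b = -0.0001000f;
  Serial.println(a);       // default: 2 decimal places, prints -0.00
  Serial.println(a, 7);    // prints -0.0001001
  Serial.println(b, 7);    // prints -0.0001000
  if (a < b) {
    Serial.println("YES"); // a really is less than b
  }
}

void loop() {}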
Thanks again. I never knew Serial.print did rounding; I thought it just printed variables exactly as they are.
With computers and floating point, there is no such thing as "exactly as they are".
The number 0.1 can't be represented exactly as a binary floating point number, just for example.
This is the same as the fact that 1/3 can't be represented exactly in decimal. It comes out as 0.33333333333333333...
In binary, 0.1 (1/10) comes out as a series of repeating binary digits that approximate 1/10. That's true for lots of different numbers that are exact in decimal. (In binary, you can only represent the values 1/2, 1/4, 1/8, 1/16, 1/32, etc. Floats sum up those binary fractions to approximate your decimal value.)
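You can see this on the Arduino itself. Here's a minimal sketch (the exact digits printed will depend on your core's float implementation): adding 0.1 ten times doesn't give exactly 1.0, because each 0.1 is really the nearest binary fraction.

void setup() {
  Serial.begin(9600);
  float sum = 0.0f;
  for (int i = 0; i < 10; i++) {
    sum += 0.1f;            // 0.1 has no exact binary representation
  }
  Serial.println(sum, 7);   // prints something like 1.0000001, not 1.0000000
  Serial.println(sum == 1.0f ? "sum == 1.0" : "sum != 1.0");
}

void loop() {}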
DuncanC:
With computers and floating point, there is no such thing as "exactly as they are".
The number 0.1 can't be represented exactly as a binary floating point number, just for example.
For binary floating point, yes.
However, to put on my nerd compiler hat: some computers have decimal floating point as well as binary floating point. Under decimal floating point the digits are stored in decimal, not binary, so 1.2 has an exact representation. IBM POWER6/POWER7/POWER8 servers have hardware support for decimal floating point (through the _Decimal32, _Decimal64, and _Decimal128 types), and Intel/AMD x86 platforms using the GCC compiler have had software emulation of decimal floating point for several years now. Neither of these platforms is commonly used in microcontrollers like the Arduino, so for this audience it isn't an option.
COBOL and PL/I have decimal floating point types as well, and their compilers have to simulate the decimal arithmetic if the machine provides no hardware support for it. Similarly, I worked on a now-dead language (DG/L) 30 years ago that allowed you to add strings, and it did the arithmetic in decimal.
But the point is that it isn't always true that floating point can't represent decimal fractions exactly; that's a property of binary floating point specifically, not of floating point in general.