Very strange behavior from floor()

I’m getting some very strange behavior from floor(); here’s a complete sketch showing the problem:

#include <Arduino.h>
#include <math.h>

void setup() {
  Serial.begin(9600);

  float x = 8.3199996;

  Serial.print("x=");
  Serial.println(x, 7);

  Serial.print("x truncated to three decimal places=");
  Serial.println(floor(x*1000.)/1000., 7);
} 
void loop() { }

The above prints:
x=8.3199996
x truncated to three decimal places=8.3199996

When obviously (at least to me) the second line should have printed “8.3190000”.

In fact, the following equivalent “standard C” program:

#include <stdio.h>
#include <math.h>

int main(void)
{
    float x = 8.3199996;
    printf("x=%.7f\n", x);
    printf("x truncated to three decimal places=%.7f\n", floor(x*1000.)/1000.);

}

when compiled and run on my Ubuntu desktop, produces the expected result:
x=8.3199997
x truncated to three decimal places=8.3190000

Does the above floor() behavior make any sense to you?

Note floor is a double function that takes a double value and produces a double result. There is a floorf function that takes a float value and produces a float result.

On your PC, double is 64 bits (1 sign bit, 11 exponent bits, 52 mantissa bits, 1 implied mantissa bit), while float is 32 bits (1 sign bit, 8 exponent bits, 23 mantissa bits, 1 implied mantissa bit). On the Arduino, doubles are 32 bits instead of 64. This doesn’t meet the ISO C standard, which requires double to be at least 64 bits, but it is common on microcontrollers that emulate floating point in software to save space by not implementing 64-bit arithmetic.

The C language was developed on a machine (the PDP-11) where it was more convenient to do floating point arithmetic in 64-bit, so the language is biased towards double (floating-point constants are double by default); nowadays, though, the compiler may evaluate an expression in single precision when both operands are single precision.

#include <math.h>
#include <stdio.h>

float x = 8.3199996;

int main (void)
{
  printf ("floor  ((%f * 1000)/1000) = %f\n", x, floor  (x * 1000.) / 1000.);
  printf ("floorf ((%f * 1000)/1000) = %f\n", x, (double)(floorf (x * 1000.f) / 1000.f));
  return 0;
}

However, because most machines do floating point in binary instead of decimal, you will get round off errors if you do the multiply by 1000, floor, and then divide by 1000.

Thanks for the very detailed response; I tried your program here on my Ubuntu desktop and got:

floor ((8.320000 * 1000)/1000) = 8.319000
floorf ((8.320000 * 1000)/1000) = 8.320000

Please notice that the second result (using floorf() and 1000.f constants which, if I read your intent correctly, should emulate very closely the Arduino behavior) still looks much more reasonable than the one from the Arduino... (even if it's rounding up instead of down as floor() should -- certainly because of the 32-bit FP rounding issues you mentioned).

So, can you tell why the result on the Arduino is so strange?

Floating point libraries/functions are notoriously buggy. Even when using the same format (double, float, IEEE...) they can produce different results, sometimes wildly different.

Another thing to realize is that, since the representations are binary floating point, most numbers do not have a precise representation in the floating point format. This usually leads to some very peculiar results.

In short, floating point math on computers does not follow the 'logical' rules we were taught in school. Unlike at school, the computer is only working with an approximation of the actual number, and as chaos theory has taught us, close (but not identical) beginnings can produce wildly divergent results...

All that said, I would suspect that what you are seeing in the Arduino sample you provided is the compiler optimizing away your 'calculation'.

Wanderson. thanks for your input.

I'm well aware of how computers do arithmetic, and the problems with different representations of numbers in decimal and in binary forms, and the issues resulting thereof (binary repeating fractions of decimally exact numbers and all that).

But my main question is: how come the Arduino shows a result so absurd, when the same program, running in a very similar environment (OK, it's another CPU, but in both cases not only the compiler but also the C library comes from the nice GNU folks), shows a much more reasonable answer? Aren't both supposed to implement standard IEEE-754 32-bit floating-point arithmetic?

Regarding the optimization, please excuse me, but this would not make sense at all, as the first "1000.f" is in an expression that's being passed to a function, and the second one is in an expression outside the function... Such an optimization would not only be wrong in the above case but also, IMHO, guaranteed to break a lot of other programs...

But anyway, I'm not here to bash the Arduino or disparage your explanations... I have a concrete problem to solve, that is, I want to strip a float variable of all but the first N significant decimal places. How would you accomplish this in a reasonably reliable manner on the Arduino, as the "obvious" solution above will simply not work?

before you dismiss the idea that the optimizer is doing away with your calculation, I suggest that you look at the assembly output and verify one way or another. I could easily see the optimizer seeing a multiplication followed by a division of the same constant being wiped away. Granted the function call should prevent that, but clearly the function is NOT being called/utilized. If it was the resulting number would not be an EXACT match for the original...

wanderson:
before you dismiss the idea that the optimizer is doing away with your calculation, I suggest that you look at the assembly output and verify one way or another.

Of course you are correct: practical proof beats theoretical assurances any time. Will do, and post the results here.

I could easily see the optimizer seeing a multiplication followed by a division of the same constant being wiped away.

When they assemble the teams for building the Greatest American Compiler, I hope they put you on the Testing team and not on the Implementation team! :wink:

Granted the function call should prevent that, but clearly the function is NOT being called/utilized. If it was the resulting number would not be an EXACT match for the original...

It's possible (I would say even "probable") that the function is being called but it's returning the unmodified parameter as a result, or even a reasonable facsimile thereof... I will check for this when I produce and verify its assembly code.

Please stay tuned.

Yep, the first rule of debugging is to always check the stuff that is easy to check first, before looking at those things that will take more time...

Of course the real 'trick' is realizing/knowing what is easy to check first... :slight_smile:

jm478:

wanderson:
before you dismiss the idea that the optimizer is doing away with your calculation, I suggest that you look at the assembly output and verify one way or another.

Of course you are correct: practical proof beats theoretical assurances any time. Will do, and post the results here.

Holy Cow, Batman... er, I mean, Wanderson: it seems you were right after all!!! Here it is, straight from the .s file:

.LC1:
        .string "x truncated to three decimal places="
[...]
.LM7:
        movw r24,r28
        ldi r22,lo8(.LC1)
        ldi r23,hi8(.LC1)
        call _ZN5Print5printEPKc
        .stabn  68,0,25,.LM8-.LFBB2
.LM8:
        movw r24,r28
        ldi r20,lo8(0x41051eb8)
        ldi r21,hi8(0x41051eb8)
        ldi r22,hlo8(0x41051eb8)
        ldi r23,hhi8(0x41051eb8)
        ldi r18,lo8(7)
        ldi r19,hi8(7)
        call _ZN5Print7printlnEdi

Now I don't know much AVR assembly language, but this surely doesn't look like floor() is being called... let's not even talk about the multiplication and division operations being performed!!!

I could easily see the optimizer seeing a multiplication followed by a division of the same constant being wiped away.

In this you are right a second time: here's the same part of the code, after recompiling with "-O0" (i.e., disabling all optimizations):

.LM10:
        ldi r24,lo8(Serial)
        ldi r25,hi8(Serial)
        ldi r18,lo8(.LC1)
        ldi r19,hi8(.LC1)
        movw r22,r18
        call _ZN5Print5printEPKc
        .stabn  68,0,25,.LM11-.LFBB2
.LM11:
        ldd r22,Y+1
        ldd r23,Y+2
        ldd r24,Y+3
        ldd r25,Y+4
        ldi r18,lo8(0x447a0000)
        ldi r19,hi8(0x447a0000)
        ldi r20,hlo8(0x447a0000)
        ldi r21,hhi8(0x447a0000)
        call __mulsf3
        movw r26,r24
        movw r24,r22
        movw r22,r24
        movw r24,r26
        call floor
        movw r26,r24
        movw r24,r22
        movw r22,r24
        movw r24,r26
        ldi r18,lo8(0x447a0000)
        ldi r19,hi8(0x447a0000)
        ldi r20,hlo8(0x447a0000)
        ldi r21,hhi8(0x447a0000)
        call __divsf3
        movw r26,r24
        movw r24,r22
        movw r18,r24
        movw r20,r26
        ldi r24,lo8(Serial)
        ldi r25,hi8(Serial)
        movw r22,r20
        movw r20,r18
        ldi r18,lo8(7)
        ldi r19,hi8(7)
        call _ZN5Print7printlnEdi

So it seems my recommendation for having you included in the Great American Compiler testing team wasn't in vain after all! :wink:

Seriously now, this is one EGREGIOUS compiler error... HOW can one trust ANYTHING coming out of such a compiler? And more importantly, what is the solution? Working with the optimizer turned off all the time? If so, where can I purchase an additional 512KB of flash for my Arduino? ;-/

I am not altogether convinced that it is an error. The problem with compiler optimizations is that, to maximize effectiveness in 95% of cases, they end up having to make these kinds of errors... Typically, the more you code the optimizer to eliminate these types of errors, the more you reduce its efficiency in the majority of cases... So, in my opinion, this may be more of an undocumented 'feature' than a 'bug'.

The solution is really simple, leave optimizations on, but code in such a way that you verify assumptions (I like the assert macro). And then anytime a section doesn't appear to be working as expected look at the assembly output and see what the compiler is producing.

Or you can just code in assembler directly if you are a masochist! :slight_smile:

wanderson:
I am not altogether convinced that it is an error.

Ouch! Really? :fearful:

The problem with compiler optimizations is that to maximize effectiveness in 95% of the cases they end up having to make these kind of errors... Typically the more you try to code the optimizer so that it eliminates these type of errors the more you reduce the optimizers efficiency in the majority of cases...

IMHO, a compiler's first responsibility is to produce correct code... to produce fast code is but a distant second.

So, in my opinion, this may be more of an undocumented 'feature', rather than a 'bug'

I'm sorry, but I do not agree with you... I think this could be defended if we were talking about some optional, extreme level of optimization like -O3 or some such. For gawd's sake, we are talking here about a measly -Os, which basically means (or so I understood from reading the manpage) just -O2 with the potentially size-increasing optimizations disabled... -O2 is supposed to be mostly safe, if I'm understanding things correctly.

But you got the "undocumented" part right: I just checked gcc.gnu.org/bugzilla and could not find anything similar to this.

The solution is really simple, leave optimizations on, but code in such a way that you verify assumptions (I like the assert macro).

Begging your pardon for my insolence, sir, but as you have already proved to be much more knowledgeable than me in this regard: I use asserts a lot, but I can't see how it would be possible to code an assert() for the above case... would you care to provide an example of how it could be done (again, for the above case)?

In the general case, let's not forget that the assert expression is code itself, and so is subject to be mangled by the compiler/optimizer right along with the code it's supposed to be verifying...

And then anytime a section doesn't appear to be working as expected look at the assembly output and see what the compiler is producing.

This is kinda impractical when we are working with a project many thousands of LOC long that's being ported from another, non-AVR processor in the first place...

Or you can just code in assembler directly if you are a masochist! :slight_smile:

Well, if I'm to check the compiler's generated assembly code all the time I suspect such chicanery, maybe it would be easier to dispense with it altogether and just code in assembler all the time... :-/

For those that may be reading this: I’ve tried to determine whether the compiler bug I just described was being caused by one of AVR-GCC’s multiple optimization options, and if so, which one (so I could turn it off and still keep the other optimizations on). For that, I ran the following script:

for opt in `avr-gcc -O -Q --help=optimizers | perl -ne 'print "$1\n" if (/-f(\S+)\s/);'`; do \
   /usr/bin/avr-g++ -mmcu=atmega328p -DF_CPU=16000000L -DARDUINO=100 -I. \
      -I/usr/share/arduino/hardware/arduino/cores/arduino -I/usr/share/arduino/hardware/arduino/variants/eightanaloginputs  \
      -g -O -fno-$opt -w -Wall -ffunction-sections -fdata-sections -fno-exceptions -S build-cli/trunc_test.cpp; \
      if grep floor trunc_test.s; then echo $opt BINGO; fi; \
done

That produced just the following output (concerning one option that needs a parameter, and another that already has "no" in front of it in avr-gcc's help output):
cc1plus: error: unrecognized command line option “-fno-pack-struct=”
cc1plus: error: unrecognized command line option “-fno-no-threadsafe-statics”

So, apparently there’s no single optimization option that can be turned off to avoid the bug.

For the record, this is avr-gcc version 4.5.3, installed as part of the package gcc-avr_4.5.3-3_i386.deb on a Xubuntu 12.04 LTS (Precise Pangolin) machine, obtained from Ubuntu’s “universe” repository.

I will build the latest version of gcc here (v4.7.1, it seems) and see whether it has the same bug; if it does, I will report it at the gcc bugzilla.

Since this really isn't an Arduino issue,
I would take this up over on the avrfreaks site and post it in the AVR gcc forum
where the actual AVR compiler developers hang out.

http://www.avrfreaks.net

--- bill

bperrybap:
Since this really isn't an Arduino issue,
I would take this up over on the avrfreaks site and post it in the AVR gcc forum
where the actual AVR compiler developers hang out.

http://www.avrfreaks.net

Will do! Thanks for the info.

jm478:
But my main question is: how come the Arduino shows a result so absurd, when the same program, running in a very similar environment (OK, it's another CPU, but in both cases not only the compiler but also the C library comes from the nice GNU folks), shows a much more reasonable answer? Aren't both supposed to implement standard IEEE-754 32-bit floating-point arithmetic?

The Arduino gets it right: 8320 / 1000 gives a binary mantissa of 1.00001010001111010111000, which represents the decimal value 8.3199996948, which should print out to 7 decimal places as 8.3199997.

Thanks for the very detailed response; I tried your program here on my Ubuntu desktop and got:

floor ((8.320000 * 1000)/1000) = 8.319000
floorf ((8.320000 * 1000)/1000) = 8.320000

No you didn't, you perhaps tried:

  floor (8.320000 * 1000) / 1000.0
  floor (8.320000 * 1000) / 1000.0

Why not do the right thing and force using single floats and then print enough digits:

  printf ("%2.10f\n", (float) floorf  (8.320000 * 1000) / 1000.0f) ; // = 8.319000

And you'll see C single floats are the same on both systems.

MarkT:

jm478:
But my main question is: how come the Arduino shows a result so absurd, when the same program, running in a very similar environment (OK, it's another CPU, but in both cases not only the compiler but also the C library comes from the nice GNU folks), shows a much more reasonable answer? Aren't both supposed to implement standard IEEE-754 32-bit floating-point arithmetic?

The Arduino gets it right: 8320 / 1000 gives a binary mantissa of 1.00001010001111010111000, which represents the decimal value 8.3199996948, which should print out to 7 decimal places as 8.3199997.

I don't think you actually read (or understood) the post you are replying to...

jm478:

Thanks for the very detailed response; I tried your program here on my Ubuntu desktop and got:

floor ((8.320000 * 1000)/1000) = 8.319000
floorf ((8.320000 * 1000)/1000) = 8.320000

No you didn't, you perhaps tried:

  floor (8.320000 * 1000) / 1000.0
  floor (8.320000 * 1000) / 1000.0

Why not do the right thing and force using _single_ floats and then print enough digits:

  printf ("%2.10f\n", (float) floorf  (8.320000 * 1000) / 1000.0f) ; // = 8.319000

And you'll see C single floats are the same on both systems.

What you propose above is completely different from the issue at hand... again, I don't think you have really read (or understood) what I posted. I suggest you read the original posts (again, if need be).

BTW, we are far away from the original FP issues which seems to be the basis for your misunderstanding... Read the rest of the thread and you will see that it seems we actually found a bona fide avr-gcc bug, so your comments above are not only wrong, but also irrelevant...

But thanks for your attempt to help anyway.

On the contrary I believe I've understood what's going on and there isn't any compiler bug or big issue with the Arduino and the Linux C implementations being different either.

A (non-ANSI) C implementation can choose whether the default float is single or double and can elect not to implement double floats. The same goes for the size of integers: int can be 16 or 32 or 64 (or 93, if you want) bits. AVR-GCC does not claim to be ANSI compliant - indeed, I suspect it couldn't fit on the smaller chips if it were.

Show me which piece of code mentioned in this thread produces a different result when optimizations are switched on and off. I can't see that there is any...

Also show me code that prints single float value 8320.0f/1000.0f differently on Linux and Arduino when the same number of decimal places are specified...

So if the hoo-hah is about the fact that AVR-GCC isn't ANSI compliant, it's a valid gripe, but it's not a bug.

Much ado about nothing new. The Wikipedia article on "floating point" describes a number of floating point calculation anomalies, and has this quote:

The fact that floating-point numbers cannot precisely represent all real numbers, and that floating-point operations cannot precisely represent true arithmetic operations, leads to many surprising situations.

Every computer science text I've ever seen has warned about expecting floating point calculations to be absolutely precise. It may surprise you to know it, but you're not the first to uncover an issue like this one.

Speaking qualitatively, you're defining a value whose precision is beyond the capacity of 32-bit floating point to represent. From http://arduino.cc/it/Reference/Float:

Floats have only 6-7 decimal digits of precision. That means the total number of digits, not the number to the right of the decimal point.

You're asking it to give you meaningful results to 8 decimal digits of precision. This platform wasn't built for that, and its developers tell you so quite clearly.

Quantitatively, here's what happens: 8.3199996, as a 32-bit FP number, has sign bit 0, exponent 3, offset exponent 82H, mantissa 851EB8H - 051EB8H, with the MSB suppressed - and an FP representation of 41051EB8H. That representation isn't exact, and every number between 8.3199993 and 8.3200001 has the same representation. 1000 has sign bit 0, exponent 9, offset exponent 88H, mantissa FA0000H - 7A0000H, with the MSB suppressed - and an FP representation of 447A0000H. Because 1000 is a reasonably-sized integer, the representation is exact.

Multiplying them yields sign bit 0, exponent 12, mantissa 103FFFFH with 011B following. That multiplication overflows the 24-bit mantissa space, so the exponent bumps to 13, offset exponent 8EH, and the mantissa shifts to 81FFFFH with 1011B following. That rounds up to 820000H, and results in an FP representation of 46020000H, which is the 32-bit FP representation of 8320, exactly. floor(8320) is, well, 8320.

Dividing 8320 by 1000 yields 8.320, and its 32-bit FP representation is, as described above, 41051EB8H - identical to the representation of 8.3199996. When you ask for that number with seven digits after the decimal, you get what you got, and it's correct within the well-known limits of floating point.

So, you got exactly the result that you could expect with a 32-bit floating point engine. There are no optimization quirks, no bugs, no problems with the compiler, and everything is kosher.

jm478:
I'm well aware of how computers do arithmetic ...

That's great, because it means that you can work out for yourself exactly how this result is calculated, and quantitatively demonstrate in this forum what, if anything, is wrong.

jm478:
Now I don't know much AVR assembly language, but this surely doesn't look like floor() is being called... let's not even talk about the multiplication and division operations being performed!!!

We can't tell what code led to the .s files you quoted, but, assuming that it's the code from your original post, there are some good reasons why the program wouldn't call floor(), or do any other arithmetic. You define x, and never change it. The optimizer might well decide, correctly, that x makes more sense as a constant. That would make x*1000.0, floor(x*1000.0), and floor(x*1000.0)/1000.0 into constants as well. It's likely that the compiler did the math itself, and just plugged the results - in 32-bit floating point - into the output code. For this program, that's valid optimization, without fault.

Summarizing: You gave the program a number - 8.3199996 - that it finds indistinguishable from 8.32, asked it to distinguish between them, and complained when it couldn't tell the difference. If you need to reliably discern differences between numbers that differ in the eighth significant decimal digit, you've selected the wrong platform. The Arduino does other things very well, but it doesn't claim to be a floating point calculation engine - in fact, it claims that it's not. If you have to get exact results using floor() for every possible number, then floating point isn't your vehicle either - a 64-bit floating point engine will show the same kinds of anomalies, just less frequently and further downstream from the decimal point. Maybe you can buy or program something to work in BCD. The fault, dear Brutus, is not in your compiler, but in yourself.

Summarizing some of the histrionic statements that have been made in this thread:

Seriously now, this is one EGREGIOUS compiler error... HOW can one trust ANYTHING coming out of such a compiler? And more importantly, what is the solution? Working with the optimizer turned off all the time? If so, where can I purchase an additional 512KB of flash for my Arduino? ;-/

Full of sound and fury, signifying nothing.

So, you got exactly the result that you could expect with a 32-bit floating point engine. There are no optimization quirks, no bugs, no problems with the compiler, and everything is kosher.

All kosher, except of course the OP's original expectations. Often when one's expectations are not met, one cries foul (or bug, or not fair, or whatever). Thanks for the detailed explanation. I've always avoided floating point math if at all possible when using microcontrollers. I usually find I can just use long integers with a little scaling magic, and don't have nearly the surprises one can have with FP variables and calculations.

NaN my ass. :wink:

Lefty

Unless you are going to turn off all builtins (-fno-builtin), or just the floor builtin (-fno-builtin-floor), the compiler will always optimize away the floor function if the argument is constant. As I and others have mentioned, on the Arduino, since doubles are hardwired to be the same size as 32-bit floats, it will act like the floorf function on your workstation.