Compiler math bug 2*60*1000 = -11072

Hi all,

I just found a bug that cost me quite a headache, because I would never have thought this calculation could be executed the wrong way. Here is the code:

#define MENU_TIMEOUT_DELAY (2*60*1000)

void setup()
{
    Serial.begin(9600);
    int var = MENU_TIMEOUT_DELAY;   // this should result in 120000 but does -11072
    Serial.println(var);
}

void loop()
{
}

I tried this on 3 different compilers and the result is always correct but not on the Arduino.

Best Markus

Try with a long.

int: +/- 2^15, long: +/- 2^31

#define  MENU_TIMEOUT_DELAY (2.0*60*1000)

void setup()
{
    Serial.begin(9600);
    long var = MENU_TIMEOUT_DELAY;  // the 2.0 makes the math double, so this prints 120000
    Serial.println(var);
}

void loop()
{
}

That isn't a bug, that's sixteen bit signed arithmetic.

Compiled for a Due you would get your expected result.

#define MENU_TIMEOUT_DELAY (2UL*60UL*1000UL)
int var = MENU_TIMEOUT_DELAY;   // still truncates where int is 16 bits - hence the Due

Squeezing seventeen bits worth of number into sixteen bits is never going to work.

.. or

#define  MENU_TIMEOUT_DELAY 2*60*1000

void setup()
{
    Serial.begin(9600);
    long var = (long) MENU_TIMEOUT_DELAY;   // expands to (long) 2*60*1000: the cast binds to the 2, so this prints 120000
    Serial.println(var);
}

MEL69: Hi all,

I just found a bug that cost me some headache because I would have never thought this calculation could be executed the wrong way. Here is the code:

You used an int, which on the Arduino is 16 bits. Since an int is signed, it can hold from -32768 to +32767.

Trying to store a value of 120000 in a 16 bit signed int leaves you with a value of 120000 mod 65536 or 54464.

But since the int is SIGNED, a value greater than +32767 is a negative number. What number? 54464 - 65536 = -11072.

The solution to your problem is to use a larger variable... try uint32_t.

By the way, that "65536" number is 2^16 (16 bits).

Range for unsigned int is actually 0 to (2^16)-1, or 65535, or 0xFFFF. Similarly, unsigned long is 0 to (2^32)-1, easier to write as 0xFFFFFFFF.

CrossRoads: Range for unsigned int is actually 0 to (2^16)-1, or 65535, or 0xFFFF. Similarly, unsigned long is 0 to (2^32)-1, easier to write as 0xFFFFFFFF.

Very true. But I didn't say UNSIGNED INT. I said INT.

And, when trying to figure out how much a 16 bit int (or register for that matter) will roll over, you deal with 65536, not 65535.

By the way, that "65536" number is 2^16 (16 bits).

No, that’s 2¹⁶ = 65536

2^16 = 18
This here’s a C forum, stranger. :wink:

Compilers have been around a long time now. It is vanishingly unlikely that one would have a simple error like this. Default assumption should be that the compiler is correct and I am wrong.

…R

Robin2: Compilers have been around a long time now. It is vanishingly unlikely that one would have a simple error like this. Default assumption should be that the compiler is correct and I am wrong.

...R

The voice of reason.

+2 karma

Default assumption should be that the compiler is correct and I am wrong.

I don't know about that. I don't assume that you are wrong, unless you disagree with me.

PaulS:

Default assumption should be that the compiler is correct and I am wrong.

I don't know about that. I don't assume that you are wrong, unless you disagree with me.

karma++

you had me laughing!

Thanks for the replies! It looks like I messed up my example; the actual code is this:

...
static unsigned long menuTimeout = 0;
bool inMenu(LiquidCrystal& lcd)
{
    if (millis() - menuTimeout > MENU_TIMEOUT_DELAY)
    {
...

I then added a Serial.print(MENU_TIMEOUT_DELAY) to inspect the arithmetic behind the #define. If you run this you will get interesting results:

#define MENU_TIMEOUT_DELAY (2*60*1000)
#define MENU_TIMEOUT2_DELAY (2L*60*1000)

void setup()
{
    Serial.begin(9600);
    long int var1 = MENU_TIMEOUT_DELAY;
    unsigned long int var2 = MENU_TIMEOUT_DELAY;
    float var3 = MENU_TIMEOUT_DELAY;
    Serial.println(var1);   //  prints -11072
    Serial.println(var2);   //  prints 4294956224
    Serial.println(var3);   //  prints -11072.00
    unsigned long int var4 = MENU_TIMEOUT2_DELAY;
    Serial.println(var4);   //  prints 120000
}
void loop()
{
}

The compiler assumes 16 bit for the #define expressions - that is different from all other compilers I am using.

Markus

The compiler assumes 16 bit for the #define expressions - that is different from all other compilers I am using.

It's platform-dependent. Presumably, the other compilers were for 32 bit platforms, like PCs.

MEL69: #define MENU_TIMEOUT2_DELAY (2L*60*1000)

Also, since you are using millis() timers like this

if (millis() - menuTimeout > MENU_TIMEOUT_DELAY)

you really want to use unsigned long, as it will help you later on down the road:

#define MENU_TIMEOUT2_DELAY (2UL*60*1000)

you can read why here.

The compiler assumes 16 bit for the #define expressions

No, it doesn't. The [u]compiler[/u] has no idea that the [u]preprocessor[/u] did a string substitution.

The fact that int math was used has NOTHING to do with the fact that you used a #define to create a name/value relationship.

Wouldn't the problem be solved with

const unsigned long MENU_TIMEOUT2_DELAY = 2 * 60 * 1000;

or would you need to include UL among the numbers as in

const unsigned long MENU_TIMEOUT2_DELAY = 2UL * 60 * 1000;

...R

Wouldn't the problem be solved with

No. The type of the variable that is to hold the result is irrelevant. There needs to be a UL on the right of the = to tell the compiler to do the intermediate arithmetic in unsigned long. The root of the "problem" is that, without the suffix, the compiler is entitled to do the math in a 16-bit int.

Actually, the problem is that some people are not aware of how the compiler generates code, so they write code that does not generate the results they expect.

PEBKAC.

Robin2: Wouldn't the problem be solved with

const unsigned long MENU_TIMEOUT2_DELAY = 2 * 60 * 1000;

or would you need to include UL among the numbers as in

const unsigned long MENU_TIMEOUT2_DELAY = 2UL * 60 * 1000;

...R

const does not make a difference.

I was just trapped by the fact that:

#define abc 120000 is treated as 32 bit

#define bcd 120*1000 is treated as 16 bit

Something that can easily slip your attention when you modify time constants in milliseconds and cross the bit boundary.