Timer not working longer than 32000ms

Hello, for my project I want to make a timer that turns the display off after about 5 minutes.
I made a timer and tested it with short times, and that worked fine. But whenever the delay time is longer than about 32 seconds, the timer no longer works. This is my (stripped-down) code:

    unsigned long startTime;
    unsigned long timeOutTime = 5*60*1000;
    bool timeOut = false;

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      // do a lot of other stuff
      startTime = millis();
      checkTimeOut();
    }

    void checkTimeOut() {
      if (millis() - startTime >= timeOutTime) {
        // switch off screen
      }
    }

Because it works up to 32000 ms, I thought it had something to do with data types, since an int can only go up to 32,767. However, all my data types are longs. Can someone tell me what I am doing wrong? Thanks in advance!

    unsigned long timeOutTime = 5*60*1000;

5, 60, and 1000 are NOT longs, so that calculation happens in 16 bits.
Use "1000L".
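
To make that visible, here is a minimal sketch I put together (assuming an AVR board such as an Uno, where int is 16 bits; unsigned variables are used so the wrap-around is well defined):

    void setup() {
      Serial.begin(9600);

      unsigned int minutes = 5 * 60;           // 300, fits easily in a 16-bit int
      unsigned long wrong  = minutes * 1000u;  // multiply still done in 16-bit math, wraps around
      unsigned long right  = minutes * 1000L;  // the L operand widens the multiply to 32 bits

      Serial.println(wrong);   // 37856 on an Uno, not 300000
      Serial.println(right);   // 300000
    }

    void loop() {}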

So when calculating it does not necessarily make it a long if it is defined as a long? Should I change it to 5L*60L*1000L, or is 5*60*1000L also fine?

Thanks for the help!

A better choice (note the U)...

    5*60*1000UL

The best choice (crystal clear to the future you)...

    5UL*60UL*1000UL

So when calculating it does not necessarily make it a long if it is defined as a long?

Your expression is not unsigned long until the assignment, which, as you found out, is too late.
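
As a quick sanity check (just a throwaway sketch; the constant name and the 9600 baud rate are taken from your first post), print the corrected value once and confirm it really is 300000:

    // The timeout from the first post, written with UL suffixes so the
    // product is computed as an unsigned long instead of overflowing an int.
    const unsigned long timeOutTime = 5UL * 60UL * 1000UL;

    void setup() {
      Serial.begin(9600);
      Serial.println(timeOutTime);   // prints 300000
    }

    void loop() {}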

It's worth knowing that the above is true for 8-bit AVR Arduinos like the Uno, Nano, Mega, Pro Micro, Pro Mini, etc. On these chips, int is 16 bits and long is 32 bits. But on 32-bit Arduinos like the Due, Zero, Nano 33, ESP8266/ESP32, etc., int and long are both 32 bits, so the problem above would not have happened. It's still a good idea to write "UL" so that your code works on either type of Arduino.
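
If you want to check which case your board falls into, a short diagnostic sketch like this (nothing assumed beyond having Serial) will print the sizes:

    void setup() {
      Serial.begin(9600);
      Serial.print("sizeof(int)  = ");
      Serial.println(sizeof(int));    // 2 on 8-bit AVR boards, 4 on 32-bit boards
      Serial.print("sizeof(long) = ");
      Serial.println(sizeof(long));   // 4 on both
    }

    void loop() {}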

westfw:
5, 60, and 1000 are NOT longs, so that calculation happens in 16 bits.

This seems almost like cruelty on the part of the compiler writers. You would expect that constant expression to be evaluated at compile time, with the compiler running on a 32-bit or even 64-bit CPU, where an int would naturally be 32 bits. So they must have made a deliberate decision to have the compiler compute the result exactly as it would be if the expression were evaluated at run time on a CPU where int is 16 bits. Perhaps there are good reasons for doing this. Can anyone think of one?
