maximum delay() is 32767ms ?

Could some C++ strong typing, function overloading, and an extra version or three of delay() take care of this problem? Maybe one delay() for each of the following input types: int, unsigned int, long, unsigned long.

Of course, there's not a lot of room on the ATmega for code bloat...

-j

Could some C++ strong typing, function overloading, and an extra version or three of delay() take care of this problem? Maybe one delay() for each of the following input types: int, unsigned int, long, unsigned long.

As David explained, the problem is fundamentally that 60*1000 does not equal 60,000 here, because the result is an int (a signed 16 bit integer). There's no way to catch that at the compiler level (that I can think of, anyway); you need to promote one or both of the constants to a long (32 bit integer) to get a result of 60000.

The best you could do (I think) would be to have a version of delay() that accepted an int, and which would barf when given a negative delay.

I do want to look through the gcc documentation, though, because it's smart enough to treat 60000 (for example) as a long, even without an L at the end. Maybe there's an option to tell it to treat all integer literals as longs (or something similar).

In the specific case we're discussing this probably is not an answer... it was not noticing the type issues that led to the problem, after all.

But when the programmer is aware of a type issue, maybe something along the following lines would help:

long k60 = 60;
long k1000 = 1000;

delay(k60 * k1000);

(Although what's in k60 and k1000 could be altered, of course; I use the "k" prefix to tell myself that the variable is to be used as a konstant.)

I wonder if replacing the function name delay with a macro that casts the parameter to an unsigned long would solve the problem.

#define delay(_val) _delay((unsigned long) _val)

where _delay is the existing function renamed

Wouldn't casting work fine? delay((uint16_t) 60 * 1000);

My 2 cents...

Casting should work as does appending L to the integer literals (e.g. 60L). But, of course, that only helps if you know you need to do it. I'm not sure of the best way to help people that don't know they need to do that.

If the arduino distribution had delay declared as a macro (with the actual function renamed to _delay) then this would do the cast without the user knowing anything about it. It would look exactly the same as now but would operate as a long.

If the arduino distribution had delay declared as a macro (with the actual function renamed to _delay) then this would do the cast without the user knowing anything about it. It would look exactly the same as now but would operate as a long.

Again, the problem is with the arithmetic, not with the delay() function. When you multiply the 16-bit signed integers 60 and 1000, you don't get 60000, but rather -5536. (The biggest number you can represent as a 16-bit signed integer is 32767.) Casting that result to an unsigned long doesn't get you 60000 back. You need to do something to make sure the arithmetic is carried out using unsigned longs (32-bit integers) in the first place. It's not clear there's any convenient way to do that automatically, however.

A way of helping people avoid this mistake in the first place might be to provide special delay_*() functions for longer time units, and then document clearly what the maximum argument for each is. If there were a delay_minutes() or delay_seconds() function, for instance, you wouldn't need to multiply 60*1000 to get the delay. These special variants could easily be implemented as macros, e.g.,

#define delay_seconds(_val) delay(1000L * (_val))

It doesn't fix the problem, of course, but ought to make it less likely that people get bitten by it.

That's not a bad idea. I've also been thinking we should add things like seconds() and minutes() to go along with the millis() function.

If the arduino distribution had delay declared as a macro (with the actual function renamed to _delay) then this would do the cast without the user knowing anything about it. It would look exactly the same as now but would operate as a long.

Again, the problem is with the arithmetic, not with the delay() function. When you multiply the 16-bit signed integers 60 and 1000, you don't get 60000, but rather -5536. (The biggest number you can represent as a 16-bit signed integer is 32767.) Casting that result to an unsigned long doesn't get you 60000 back. You need to do something to make sure the arithmetic is carried out using unsigned longs (32-bit integers) in the first place. It's not clear there's any convenient way to do that automatically, however.

Casting the parameter in a macro to an unsigned long forces the compiler to do the entire calculation as a long on other platforms I have used. Are you saying that the suggestion to use a macro with the cast won't work here?

Casting the parameter in a macro to an unsigned long forces the compiler to do the entire calculation as a long on other platforms I have used. Are you saying that the suggestion to use a macro with the cast won't work here?

It might or it might not, depending on how the function is called. In particular, it won't work unless the arithmetic is being done right there in the function call. So, it would allow "delay(60*1000)" to work, but not, for instance, "int interval = 60 * 1000; delay(interval)".

Casting the parameter in a macro to an unsigned long forces the compiler to do the entire calculation as a long on other platforms I have used. Are you saying that the suggestion to use a macro with the cast won't work here?

It might or it might not, depending on how the function is called. In particular, it won't work unless the arithmetic is being done right there in the function call. So, it would allow "delay(60*1000)" to work, but not, for instance, "int interval = 60 * 1000; delay(interval)".

Tom, your comment is an example of how integer math can overflow in general. I don't think there is any way to reduce the general problem you show in your example other than educating users about the nature of type overflow.

I thought the subject of this thread was how to address overflow in the argument to the arduino delay function.

It seems to me that we could help users with overflows in the arguments of functions like delay() if the cast were done implicitly by calling the function through a macro. I wonder if you think that would be worthwhile.

I thought the subject of this thread was how to address overflow in the argument to the arduino delay function.

My main request when I started the thread was to have the documentation improved, to explain more clearly how to use the delay() function correctly. The docs still don't indicate what the type of the argument is. A warning there about the perils of 16-bit arithmetic would also be pretty helpful.

It seems to me that we could help users with overflows in the arguments of functions like delay() if the cast were done implicitly by calling the function through a macro. I wonder if you think that would be worthwhile.

I agree it would probably help most people avoid the mistake I made, and to that extent it might be worthwhile. On the other hand, it doesn't help people understand what they're doing.

My main objection to the macro, however, is that the way it works is a bit unclean. The cast doesn't act on the argument as a whole, but depends on it being an arithmetic expression that can be modified before it's evaluated. Usually, you try to avoid writing macros that work that way because they lead to code that fails mysteriously. In this case, for example, the following cases do and don't work...

#define delay(_val) original_delay( (unsigned long) _val )

delay(60000);           // works

delay(60*1000);         // works

delay((60*1000));       // fails

unsigned long interval = 60*1000;
delay(interval);        // fails

So, the simplest and most common usages of the function would now work, but the reason why the last two examples fail would be hard to explain. Just as C code, all four of my examples would ordinarily be expected to produce the same result, since all four delay() arguments evaluate to the same value when taken by themselves. Of course, without the macro, all four examples would fail, but once you understood why, you'd have learned something useful and (hopefully) become a better C programmer.

Of course, without the macro, all four examples would fail,

That's not right, of course - the first example,

delay(60000);

works already. It's just the second of my four examples that the macro would allow to work.

Actually the fourth example of the macro does exactly what you expect it to do (garbage in/garbage out). And I do think an inexperienced programmer is much more likely to use the second case rather than the third.

I do wholeheartedly agree that documentation and education are the best prevention for this problem. But it has been my experience that many who are attracted to the Arduino enthusiastically jump in at the deep end with some playground code that is related to their needs, and don't spend a lot of time reading about syntax or semantics :wink:

Anyway, it's up to the arduino team to decide whether they want to provide a little bit of a safety net here or not.

I imagine that those who read this thread are unlikely to fall into the trap easily, so thanks for raising the issue.

I am so confused by the delay function.
I want a delay of 84000 secs, but this piece of code doesn't work.
Can anyone help me? Is there anything wrong with my use of the delay function?

int j;
unsigned long a;
for (j = 1; j <= 840; j++)
{
  if (digitalRead(19) == HIGH)
  {
    digitalWrite(16, LOW);
    digitalWrite(17, HIGH);
    a = (j * 100);
    delay(a);
    digitalWrite(17, LOW);
  }
  delay(100);
}

Change this...

    a = (j * 100);

...to this...

    a = (j * 100L);

Keep in mind that delay() takes a value in milliseconds, not seconds.

From your loop, it appears that you want to wait longer and longer each time through the loop. Is that true?

And this puzzles me:

I want a delay for 84000

Why do you want to wait almost an entire day?