The merits of delay() vs millis()

I'm still reading that thread and am amazed at how much intellectual time is lost on this discussion :slight_smile:

I'm surprised low power and sleep modes haven't come into it yet


The problem is that when an experienced programmer uses delay() they know (hopefully) the consequences and are acting with those consequences in mind. A beginner is yet to find out. They have to learn somehow; it's just a shame that they find out by posting 1000 lines of code with 3 hours of total delay built in and asking 'why is my code not very responsive?'. Better they find out with only 20 lines of code.


here is a post to feed the beast :cold_face: :innocent: :beers:

Let’s introduce a completely new term - LATENCY

Not only sluggishness in responding to internal events, but also the responsiveness to user inputs.

But there are plenty of times when that never happens. Most of my early Arduino projects made use of delay(), and most of them still meet the requirements I had for them.

Not all code evolves into a complex multi-tasking application.


No, you are right but that misses the point; a specific project might work perfectly well using delay but the next one or the one after that won't. By making one project work with delay someone is learning that using delay is perfectly OK. Until it isn't.

Clearly delay()s in the hundreds of milliseconds are likely to cause problems with sluggish responses to user inputs. But I think we should acknowledge that just using millis() isn't a universal cure for that. for() loops are notorious for blocking the CPU if, for example, you are ploughing through a long array of data, processing each entry. It's an easy mistake to make, even with an elegantly architected application that uses millis() and an FSM.
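To make that concrete, here is a minimal sketch of the usual fix (the array, its size, and processEntry() are placeholders of mine): let loop() handle one entry per pass instead of churning through the whole array in one blocking for() loop.

const int NUM_ENTRIES = 1000;
int data[NUM_ENTRIES];              // placeholder data
int nextEntry = 0;                  // FSM state: next entry to process

void processEntry(int i) {
    data[i] = data[i] * 2;          // stand-in for the real per-entry work
}

void setup() {}

void loop() {
    if (nextEntry < NUM_ENTRIES) {  // one entry per pass, not 1000 in one go
        processEntry(nextEntry++);
    }
    // buttons, serial and displays still get serviced between entries
}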

Realistically, the best we can create using millis() etc. is co-operative multi-tasking. Perhaps we should tell newbies not to use Arduino at all, and go straight to FreeRTOS with its pre-emptive multi-tasking?
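For anyone curious what that would look like, here is a minimal pre-emptive blink, assuming feilipu's Arduino_FreeRTOS library for AVR boards (the task name, stack depth and timing are illustrative):

#include <Arduino_FreeRTOS.h>

void TaskBlink(void *pvParameters) {
    pinMode(LED_BUILTIN, OUTPUT);
    for (;;) {
        digitalWrite(LED_BUILTIN, !digitalRead(LED_BUILTIN));
        vTaskDelay(500 / portTICK_PERIOD_MS);   // blocks this task only; others keep running
    }
}

void setup() {
    xTaskCreate(TaskBlink, "Blink", 128, NULL, 1, NULL);
    // with this library the scheduler starts automatically after setup() returns
}

void loop() {
    // runs as the idle task with this library; normally left empty
}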

Trouble is, often someone will write a fade for loop (0...255) with a short ten or twenty millisecond delay...
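That pattern looks something like this (the pin and timings are illustrative), and it ties the processor up for 256 steps of 20 ms, over five seconds, per fade:

const byte LED_PIN = 9;             // any PWM-capable pin

void setup() {
    pinMode(LED_PIN, OUTPUT);
}

void loop() {
    for (int level = 0; level <= 255; level++) {
        analogWrite(LED_PIN, level);
        delay(20);                  // nothing else runs for the whole fade
    }
}

The equivalent non-blocking fade keeps loop() spinning between steps:

const byte LED_PIN = 9;
int level = 0;
unsigned long lastStep = 0;

void setup() {
    pinMode(LED_PIN, OUTPUT);
}

void loop() {
    if (level <= 255 && millis() - lastStep >= 20) {
        analogWrite(LED_PIN, level++);
        lastStep = millis();
    }
    // other tasks run here between fade steps
}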

I think you are being a bit unfair on the newbies - we shouldn't assume they are that dim. When I was a newbie it wasn't exactly a giant intellectual leap to appreciate the blocking nature of delay() and thus when not to use it.

And now, several years later, I still use delay() occasionally.

As I say, for() loops are notorious blockers anyway, even without delay()s in them. Responsive apps require a lot more than just dumping delay() for millis().

I don't assume they are dim, although some obviously are! It's quite clear from helping lots of them that some 'get' multi-tasking, non-blocking code pretty much as soon as you introduce the concept, and some never will.


? ? ?
Either you don't trust the compiler to optimise these correctly or you are in the habit of filling the for loop body with lots of time-consuming stuff. Anyway, I've seen enough of this thread.

Don't worry, there will be another one along on the same subject in a few months!


Yes, yes. I think I see what y'all are getting at.

When the programming gets difficult, a little wine can help.

When the wine gets to be too much help, I crawl off to bed and an automatic sleep mode is induced, low power, very.

Time wasted, inert, under- or perhaps mis-utilised.

I am awakened by interrupt: my clock is unstable for a few hundred thousand cycles.

Perhaps a delay of substantial parameter is required.

a7

I'm just pointing out that we shouldn't just be telling newbies that delay() is evil and millis() is good. Even using millis() there are plenty of places where we might end up blocking the processor unintentionally. As you quite rightly say, we must pay attention to for() loops, as well as any repeated calls to the numerous blocking functions in the libraries.

Recently I developed an application using all the right architecture, millis() timers, multiple FSMs, only to suffer a major problem with dropping incoming data. I found out that the library I was using to drive the LCD display took almost 100ms to clear the screen and write one line of text!
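A common mitigation, sketched here against the stock LiquidCrystal API (the pin mapping and field position are illustrative), is to avoid clear() altogether and overwrite a fixed-width field in place, and only when the value actually changes:

#include <LiquidCrystal.h>

LiquidCrystal lcd(12, 11, 5, 4, 3, 2);    // illustrative pin mapping
long lastValue = -1;

void setup() {
    lcd.begin(16, 2);
    lcd.print("Count:");
}

void loop() {
    long value = millis() / 1000;          // stand-in for real data
    if (value != lastValue) {              // redraw only when something changed
        lcd.setCursor(7, 0);
        lcd.print(value);
        lcd.print("   ");                  // pad over any leftover digits
        lastValue = value;
    }
}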

So, it's more complicated than dumping delay() for millis(), and yet it still remains true that delay is fine in some circumstances. So long as you understand the implications and limitations.

I don't disagree! I feel we've all had an excellent opportunity to air our thoughts and listen to others. It's been fascinating (although admittedly a little repetitive at times!). :grinning:

while(delay_vs_millis_unresolved)

OK. Here. This should be the contents of loop():

#include <avr/sleep.h>

void setup() {}

void loop() {
    sleep_mode();   // sleep (idle mode by default) until an interrupt wakes the MCU
}

Also, any unconnected I/O pins should have pullups enabled (no floating pins!), and digital input buffers should be disabled on analog pins used for analog input. The PRR register can be used to disable on-chip peripherals that consume power but that you aren't using. And of course you can save power by lowering the clock speed. Of course you still have the power-hungry USB interface and, on some boards, linear voltage regulators that waste power.
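On an ATmega328P that advice might look something like this (a sketch, not exhaustive; the avr/power.h macros wrap the PRR bits):

#include <avr/sleep.h>
#include <avr/power.h>

void setup() {
    for (byte pin = 2; pin <= 13; pin++) {
        pinMode(pin, INPUT_PULLUP);   // no floating pins (skip any pins you actually use)
    }
    DIDR0 = 0x3F;                     // disable digital input buffers on A0-A5
    power_adc_disable();              // set PRR bits for peripherals you aren't using...
    power_spi_disable();
    power_twi_disable();              // ...but leave Timer0 alone: millis() needs it
}

void loop() {
    sleep_mode();
}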


I'm just pointing out that we shouldn't just be telling newbies that delay() is evil and millis() is good.

Using delay() is a bad habit on MCUs in general. The more one does it, the harder it is to break; limited work-arounds get tried first as the stubborn denial sets in.

Using non-blocking code is a good habit, and it's good practice to do it! One can achieve what was previously out of reach.

So, it's more complicated than dumping delay() for millis(), and yet it still remains true that delay is fine in some circumstances. So long as you understand the implications and limitations.

Of course beginners will know the limitations and complications, so it's okay?

Use the un-delay method; it is cut-and-dried simple, guaranteed to work if you don't screw it up playing helpless victim.

Delay is fine in some circumstances until it becomes a habit, like using an int where a byte will do since there's way more RAM than this sketch needs. 20 sketches later, doing the same, after 2 days or more of random debugging we get to see if we can help fix the code on the forum.

Even using millis() there are plenty of places where we might end up blocking the processor unintentionally.

Getting code wrong is no argument for not debugging and learning better.

Recently I developed an application using all the right architecture, millis() timers, multiple FSMs, only to suffer a major problem with dropping incoming data. I found out that the library I was using to drive the LCD display took almost 100ms to clear the screen and write one line of text!

That's what you get for using millis()? Your "millis ain't so great" rhetoric includes that?

I made a loop counter task for developing non-blocking code. If you make a change and your count goes from 67 kHz to 990 Hz, finding the goof should be easy.
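Something along these lines (a minimal version; the names and report interval are mine):

unsigned long loopCount = 0;
unsigned long lastReport = 0;

void setup() {
    Serial.begin(115200);
}

void loop() {
    loopCount++;
    if (millis() - lastReport >= 1000) {   // report once per second
        Serial.print(F("loop rate: "));
        Serial.print(loopCount);           // passes per second = Hz
        Serial.println(F(" Hz"));
        loopCount = 0;
        lastReport += 1000;
    }
    // ...the rest of the non-blocking tasks go here...
}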

After I learned to walk, I learned to use the potty and clean my own butt.
But really up until then, having someone else do it seemed fine. And then I knew better.


Of course you still have the power-hungry USB interface and, on some boards, linear voltage regulators that waste power.

Wait till you get to hacking stand-alone AVRs.
Boardless Arduinos, be sure to see the Evolution