The merits of delay() vs millis()

We are into semantics here: what does '100% CPU utilisation' mean? The CPU doesn't care. It asks the memory for the next instruction, gets the instruction, does what it says, and asks for the next one. Whether those instructions do anything YOU think is useful is above the CPU's pay grade. Whether it's waiting in a delay loop, or checking to see if there is anything more useful to do and mostly finding there isn't, is of no concern to the CPU.

Yes, exactly, and honestly I think we all understand the point. "Utilisation" and "waste" can both be contentious terms and mean more to humans than CPUs.

My point remains: sitting in delay() and sitting in loop() look the same to the CPU: in both cases it is waiting for some condition to be met. Which one we use depends on what condition(s) we are waiting for.

Neither is "better", neither is "evil", they are simply appropriate or inappropriate for the job.
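To make the contrast concrete, here is a minimal sketch of the two blink idioms side by side. The function names are mine, and millis()/delay() are simulated with a fake clock so the logic can run off-board; on real hardware you would use the Arduino core's functions and toggle a pin where the comments indicate.

```cpp
#include <cstdint>

// Fake clock standing in for the Arduino core's millis()/delay(),
// so the two idioms can be compared off-board.
static uint32_t fakeClock = 0;
uint32_t millis() { return fakeClock; }
void delay(uint32_t ms) { fakeClock += ms; }

// Blocking style: the CPU sits inside delay() between toggles and
// cannot respond to anything else in the meantime.
int blinkWithDelay(int cycles) {
    int toggles = 0;
    for (int i = 0; i < cycles; ++i) {
        ++toggles;          // digitalWrite() toggle here on hardware
        delay(500);
    }
    return toggles;
}

// Non-blocking style: each pass checks whether 500 ms have elapsed
// and otherwise falls straight through, free to do other work.
int blinkWithMillis(uint32_t runForMs) {
    uint32_t start = millis();
    uint32_t lastToggle = millis();
    int toggles = 0;
    while (millis() - start < runForMs) {
        if (millis() - lastToggle >= 500) {
            lastToggle += 500;
            ++toggles;      // toggle here; the rest of loop() still runs
        }
        fakeClock += 1;     // stand-in for time passing each loop() pass
    }
    return toggles;
}
```

Both produce the same blink; the difference is only what else the CPU is free to do between toggles.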


I think inappropriate is worse than appropriate.

Leading people down a path that forces them to rewrite their whole code
when it has to do more than one thing is evil.


I'm still reading that thread and am amazed at how much intellectual time is lost on this discussion :slight_smile:

I'm surprised low power and sleep modes haven't yet come into it


The problem is that when an experienced programmer uses delay they know (hopefully) the consequences and are acting with those consequences in mind. A beginner is yet to find out. They have to learn somehow; it's just a shame that they find out by posting 1000 lines of code with 3 hours of total delay built in and asking 'why is my code not very responsive?'. Better they find out with only 20 lines of code.


here is a post to feed the beast :cold_face: :innocent: :beers:

Let’s introduce a completely new term - LATENCY

Not only sluggishness responding to internal events, but also the responsiveness to user inputs.

But there are plenty of times when that never happens. Most of my early Arduino projects made use of delay(), and most of them still meet the requirements I had for them.

Not all code evolves into a complex multi-tasking application.


No, you are right but that misses the point; a specific project might work perfectly well using delay but the next one or the one after that won't. By making one project work with delay someone is learning that using delay is perfectly OK. Until it isn't.

Clearly delay()s in the hundreds of milliseconds are likely to cause problems with sluggish responses to user inputs. But, I think we should acknowledge that just using millis() isn't a universal cure for that. For() loops are notorious for blocking the CPU if you are ploughing through a long array of data, processing each entry, for example. It's an easy mistake to make, even with an elegantly architected application that uses millis() and an FSM.
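One common fix for that for()-loop trap is to give the state machine a persistent cursor and process the array a budgeted chunk at a time, returning to loop() between chunks. This is a sketch under my own assumptions (names, array size, and the trivial "work" are all hypothetical):

```cpp
#include <cstddef>

// Hypothetical data set; the point is only the chunking pattern.
const size_t N = 10000;
int samples[N];
size_t cursor = 0;   // survives between loop() passes
long total = 0;

// Process at most `budget` entries per call, then return so that
// buttons, serial, etc. still get serviced from loop().
// Returns true once the whole array has been processed.
bool processChunk(size_t budget) {
    size_t end = cursor + budget;
    if (end > N) end = N;
    while (cursor < end) {
        total += samples[cursor++];   // trivial stand-in for real work
    }
    return cursor == N;
}
```

In loop() you would call processChunk(64) (or whatever budget keeps each pass short enough) alongside the input-handling code, instead of a single for() loop over all 10000 entries.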

Realistically, the best we can create using millis() etc is co-operative multi-tasking. Perhaps we should tell newbies not to use Arduino at all, and go straight to FreeRTOS with its pre-emptive multi-tasking?
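For what the co-operative version tends to converge on, here is a minimal sketch: each "task" records when it last ran and yields immediately if its interval hasn't elapsed. The struct and names are mine, and millis() is faked so the pattern can run off-board:

```cpp
#include <cstdint>

static uint32_t fakeClock = 0;   // stands in for the Arduino millis()
uint32_t millis() { return fakeClock; }

// One co-operatively scheduled task: run at most once per `interval` ms.
struct Task {
    uint32_t interval;
    uint32_t last;
    int runs;                    // counts executions, for illustration
};

// Called every pass through loop(); returns immediately if the
// task isn't due, so no task ever blocks the others.
void runIfDue(Task &t) {
    if (millis() - t.last >= t.interval) {
        t.last += t.interval;    // schedule from the due time, not "now"
        ++t.runs;                // real work would go here
    }
}
```

loop() then becomes a flat list of runIfDue() calls, which is exactly the co-operative multi-tasking that pre-emptive schedulers like FreeRTOS automate.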

Trouble is, often someone will write a fade for loop (0...255) with a short ten or twenty millisecond delay...
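That fade loop is a nice test case: for (b = 0; b <= 255; b++) with delay(10) blocks for roughly 2.5 seconds. A non-blocking rewrite takes one step per pass of loop(), only when the interval has elapsed. A sketch, with my own variable names and a faked millis() so it runs off-board:

```cpp
#include <cstdint>

static uint32_t fakeClock = 0;   // stands in for the Arduino millis()
uint32_t millis() { return fakeClock; }

uint8_t brightness = 0;
uint32_t lastStep = 0;
const uint32_t stepMs = 10;

// Call from every pass of loop(): takes one fade step at most every
// stepMs and returns immediately otherwise, so nothing else is held
// up for the ~2.5 s the for/delay version would take.
void fadeStep() {
    if (brightness < 255 && millis() - lastStep >= stepMs) {
        lastStep = millis();
        ++brightness;            // analogWrite(ledPin, brightness) on hardware
    }
}
```

The fade takes the same wall-clock time either way; the difference is that loop() keeps running between steps.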

I think you are being a bit unfair on the newbies - we shouldn't assume they are that dim. When I was a newbie it wasn't exactly a giant intellectual leap to appreciate the blocking nature of delay() and thus when not to use it.

And now, several years later, I still use delay() occasionally.

As I say, for() loops are notorious blockers anyway, even without delay()s in them. Responsive apps require a lot more than just dumping delay() for millis().

I don't assume they are dim, although some obviously are! It's quite clear from helping lots of them that some 'get' multi-tasking, non-blocking code pretty much as soon as you introduce the concept, and some never will.


? ? ?
Either you don't trust the compiler to optimise these correctly or you are in the habit of filling the for loop body with lots of time consuming stuff. Anyway, I've seen enough of this thread.

Don't worry, there will be another one along on the same subject in a few months!


Yes, yes. I think I see what y'all are getting at.

When the programming gets difficult, a little wine can help.

When the wine gets to be too much help, I crawl off to bed and an automatic sleep mode is induced, low power, very.

Time wasted, inert, under- or perhaps mis-utilised.

I am awakened by interrupt: my clock is unstable for a few hundred thousand cycles.

Perhaps a delay of substantial parameter is required.


I'm just pointing out that we shouldn't just be telling newbies that delay() is evil and millis() is good. Even using millis() there are plenty of places where we might end up blocking the processor unintentionally. As you quite rightly say, we must pay attention to for() loops, as well as any repeated calls to the numerous blocking functions in the libraries.

Recently I developed an application using all the right architecture, millis() timers, multiple FSMs, only to suffer a major problem with dropping incoming data. I found out that the library I was using to drive the LCD display took almost 100ms to clear the screen and write one line of text!

So, it's more complicated than dumping delay() for millis(), and yet it still remains true that delay is fine in some circumstances. So long as you understand the implications and limitations.

I don't disagree! I feel we've all had an excellent opportunity to air our thoughts and listen to others. It's been fascinating (although admittedly a little repetitive at times!). :grinning:
