Delay() or Not Delay(), that is the question.

Hi everyone,

I'm an adult educator, by default at first and then by post-grad study, so I'm always pondering the desirability of including stuff in training and, if so, to what depth and how soon. One of the first programming courses I went on adopted a silo approach and taught each command in depth, although a working program needs width (i.e. knowledge of many commands) before depth (knowledge of each command's esoteric parameters). Since then, it's been interesting for me to see how curriculum requirements are a) defined and then b) sequenced into the material.

This thread raises the issue of whether it is ever (never?) desirable to learn the use of delay(). If it is desirable to learn it, should it come before the Blink Without Delay (BWD) millis() approach, or what?

I suppose one school of thought is never to use delay(), although a 5 ms delay to let an EEPROM write finish is probably a legitimate use of it. If your program does literally nothing but flash an LED for 10 seconds every hour, is delay() legitimate then, or is it just being lazy? It would certainly be difficult to undo that choice if the program requirements ever changed.

If delay() is learned first, is there a risk of the "with delay()" mindset getting too entrenched so that when it is definitely inappropriate to use it, it's difficult to learn the BWD / state machine approach?

Discuss....

Edit... a bit of background. The delay() function stops your sketch in its tracks while the delay is, well, delaying. So if you switch an LED on and then delay for 10 seconds before switching it off again, your code can do nothing else in the meantime. Using millis(), however, the program would switch the LED on and carry on doing stuff. Among the stuff it would do is look every now and then to see whether 10 seconds have expired. If not, it carries on with other stuff. If they have, it turns the LED off, then carries on with other stuff.
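For anyone who hasn't met it yet, a minimal sketch of the millis() approach looks something like this (the pin number and the 10-second interval are just for illustration):

```cpp
const byte ledPin = 13;                  // assumed: the built-in LED
const unsigned long interval = 10000UL;  // 10 seconds, for illustration

unsigned long previousMillis = 0;
bool ledState = LOW;

void setup() {
  pinMode(ledPin, OUTPUT);
}

void loop() {
  // check the clock rather than waiting on it
  if (millis() - previousMillis >= interval) {
    previousMillis += interval;
    ledState = !ledState;
    digitalWrite(ledPin, ledState);
  }
  // ...carry on with other stuff here, every pass through loop()...
}
```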

Discuss....

Not here, though. This isn't a programming question. This should be in Bar Sport or somewhere else.

PaulS:

Discuss....

Not here, though. This isn't a programming question. This should be in Bar Sport or somewhere else.

Fair enough, although I thought it better than having a philosophical discussion in the other thread....

As I see it, the problem that people run into if they develop a non-trivial program using delay() is that it is very difficult to convert it to use BWoD.

Clearly BWoD makes perfect sense in simple programs.

One solution might be to make up a short demo program that shows the problem of using delay - perhaps push a button to light an LED for 5 secs and show how you can't push the button again to turn it off.
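Something like this, perhaps (the pin numbers are just assumptions): press the button and the LED lights for 5 seconds, but while delay() is running the sketch cannot see a second press, so you can't turn the LED off early.

```cpp
const byte buttonPin = 2;   // assumed: button to ground, internal pull-up
const byte ledPin = 13;     // assumed: built-in LED

void setup() {
  pinMode(buttonPin, INPUT_PULLUP);
  pinMode(ledPin, OUTPUT);
}

void loop() {
  if (digitalRead(buttonPin) == LOW) {   // button pressed
    digitalWrite(ledPin, HIGH);
    delay(5000);                         // stuck here: nothing else can happen
    digitalWrite(ledPin, LOW);
  }
}
```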

Another issue, which I think is closely connected with this, is the (understandable) tendency of newbies to bundle all their code into loop() rather than breaking it out into short functions. I reckon the BWoD concept is easier to implement when the code is properly organized, and good organization also makes it easier to grasp the BWoD concept.

...R

delay() and blink-with-delay are very useful, BUT they must not be seen as the correct way to write programs. delay(), as used in Blink, is a good way to show how to make an LED blink (by taking the pin high and low). In that context it allows you to show clearly the sequence required and the usage of digitalWrite(). The same is true of the more complex examples provided by the likes of SparkFun and Adafruit.

What you need to do is look at the issue of software design and not just programming: in other words, think before you code!

Mark

It needs to be taught despite the fact there are often better approaches....

Can someone point me to a reference to this "BWD" approach? I'm a newbie.

bobmon:
Can someone point me to a reference to this "BWD" approach? I'm a newbie.

Sorry, my bad for the acronym. It's Blink Without Delay, which you can see here or load from the IDE at File > Examples > 02.Digital > BlinkWithoutDelay.

Edited my opening post to spell it out....

Personally, my only use for delay is for debouncing, so in a teaching context, I'd agree that it's best to start with BWD and mention delay and its uses in passing.
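For what it's worth, that kind of delay()-based debounce can be as crude as this (the pin numbers and the 20 ms wait are just assumptions):

```cpp
const byte buttonPin = 2;   // assumed: button to ground, internal pull-up
const byte ledPin = 13;     // assumed: built-in LED

bool ledState = LOW;
bool lastReading = HIGH;

void setup() {
  pinMode(buttonPin, INPUT_PULLUP);
  pinMode(ledPin, OUTPUT);
}

void loop() {
  bool reading = digitalRead(buttonPin);
  if (reading != lastReading) {   // the pin changed state
    if (reading == LOW) {         // new press: toggle the LED
      ledState = !ledState;
      digitalWrite(ledPin, ledState);
    }
    delay(20);                    // crude: just wait out the contact bounce
    lastReading = reading;
  }
}
```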

However, if the person seeking help is just using the Arduino as a means to an end, or has no interest in learning coding skills, I see no harm in using delay if it gets the job done. Best to warn them up front of the pitfalls, of course.

To use a carpentry analogy, sometimes you don't want to go to the trouble of creating a beautiful dovetail joint when a couple of nails will serve for mounting your bird feeder.

If it helps any, I would like to give my 2 cents.

The thing about delays, and whether to use them or not, depends on the code and what it needs to do.
I forget who it was on here that said it, but a delay is a great way to do nothing. 99% of the time the delay() function is not needed, and that's for two reasons.

One: The code could be doing something else (hundreds of things) in the time it takes to blink an LED every second or half second, or debounce the state of a button. Why be at a dead stop when you can always come back to it later? (Unless it's in the setup function ;))

Two: Some functions have natural delays already, like the Serial.print function. Put a few of them in your code and you will notice a slight difference in speed. Another is the for loop: you can't really do anything else but what's inside it.

The only time I ever use a delay is when it just doesn't matter. If the code is not time sensitive, then why bother adding extra variables and taking up more memory with type long or unsigned long, when a simple delay will suffice? Yes, it is better practice to write code without delays, but in some situations time just doesn't matter.

That's all I have to say.

Another is the for loop: you can't really do anything else but what's inside it.

But that's why blink without delay first is so important. Once you get into the mindset of not writing blocking code, you also see that things like for loops are equally problematic. Moving a servo through its limits typically uses a for loop. But it shouldn't. It should use a state machine and a couple of variables, and, of course, no delay.

If you approach coding from an event-driven point of view, you don't even think about writing top-down/blocking code.
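Something along these lines, for example (this is only a bare sketch of the idea, not the library's Sweep example; the pin and the 15 ms step interval are assumptions):

```cpp
#include <Servo.h>

Servo myServo;

int pos = 0;                            // current position in degrees
int increment = 1;                      // +1 sweeping up, -1 sweeping down
unsigned long lastStep = 0;
const unsigned long stepInterval = 15;  // ms between one-degree steps (assumption)

void setup() {
  myServo.attach(9);                    // assumed servo pin
}

void loop() {
  // a tiny state machine: the "state" is just pos and increment
  if (millis() - lastStep >= stepInterval) {
    lastStep = millis();
    pos += increment;
    if (pos >= 180 || pos <= 0) {       // reverse direction at either limit
      increment = -increment;
    }
    myServo.write(pos);
  }
  // loop() returns immediately, so other tasks can run on every pass
}
```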

I see delay() as sort of the equivalent of a 10lb hammer in any carpenter's toolbox. It can be highly effective and efficient to use, but you need to be made aware of its potential collateral effects, and it certainly isn't the only tool to bring to the job.

I imagine many carpenters use the 10lb hammer a lot early in their careers; the more skills they acquire, the less it gets used. A true master may not even bring it out at all.

I agree with PaulS: the Sweep example is another of the "how not to design code" examples, as are many of the examples given by the hardware providers.

Mark

Both have legitimate uses so both should be learned/taught. But the pros and cons should also be learned. It can be more effective if this doesn't all happen at the same time. Start with the simpler and proceed to the more complex.

To illustrate what I mean, I will relate a story from the first programming class I ever took. The gentleman that taught it was one of the most awesome teachers I've ever had, in his ability to quickly and accurately convey concepts. He had a "downfall" though, and it was by design: He lied. He would teach a concept, and we would spend a day or two working with it, becoming good at it. Then the next lecture would start and he would say something like, "Well, I lied to you yesterday." At which point he would go on to explain how he had simplified some concept, or not told the whole story, or didn't explain some part ("just do it like this for now"), or why what he taught previously wasn't perhaps always the best approach. Essentially, this involved re-teaching some minor percentage of the material (or just elaborating on it) but the overall effect was to build on past learning and was extremely effective and also very much time-efficient. I think it also instilled a great understanding that any explanation of any interface is, of necessity, incomplete at best and there are always under-the-covers details that may or may not be important to know in a particular situation.

I'm very much in the same camp as @HazardsMind. I try to always avoid using delay(), but my current project has no fewer than seven calls -- in setup() -- where of course a bunch of initialization is going on and progress messages are displayed on an LCD. It's very much a sequential process, e.g. while I wait for DHCP to do its thing, there's not a lot else to do anyway. I've never been overly worried about the efficiency of processes that happen once. If they happen very frequently, then that is a completely different story.

the Sweep example is another of the "how not to design code" examples.

It does what it sets out to do, which is to sweep a servo through 180 degrees in each direction. It is not presented as an example of good programming practice, and if it were written as non-blocking code many users looking at it for the first time would be bamboozled for sure.

What would have been better would be to include two versions: one as now, and a second non-blocking version with an explanation of why it is better in some circumstances. The examples, particularly those included with libraries, are examples of how things can be done, not necessarily how they should be done. This may be a shame but it is a fact.

Robin2:
Another issue, which I think is closely connected with this, is the (understandable) tendency of newbies to bundle all their code into loop() rather than breaking it out into short functions.

I'll bundle Arduino code into loop(), especially when I want to avoid the overhead of unnecessary function calls.
But for big code in the past, I've put sections into class objects. Is a few thousand lines plus libs "big"?
And I haven't been a code newbie in over 30 years.

If you're tucking code away "because it's messy" then you probably keep a neat desk too.
I'm not so retentive.

If you're talking about more than 100 lines of code then sure, and while you're at it, look for all the near-duplications and similar blocks that should be consolidated to make the code shorter. That would be a better use of time than just re-arranging it to be "neat" while adding stack use and cycles to pay for it.

For example, I just used 24 lines of code somewhere to replace what took a lot more with options to expand and all of my example is in those awful global variables your teachers warned you about, or bundled into setup() or loop(). All. 24. Code. Lines.
There's a longer version of the same with print options that's still easier to read than Dostoyevsky and it too is bundled.

I'd rather have newbs see and learn the code than give them Yet Another Black Box to slap in or onto a project and then come back with "something's wrong with this code I got".

HazardsMind:
If the code is not time sensitive, then why bother adding extra variables and taking up more memory with type long or unsigned long, when a simple delay will suffice?

Often I start one sketch by renaming and modifying another that has the framework all set up.

Also, and I have examples if you want, the only time you need to use unsigned longs for timing is when you need more than about 65 seconds with millis() or about 65 milliseconds with micros() (i.e. more than an unsigned int can count).
My "without delays" debounce routine uses bytes for time variables. I wouldn't dream of simply ignoring the pin for some set time and then if the next look read as expected, call that success any more than I'd stick a delay in a serial read process and assume the whole message has been received.

To me, the cycle rate of an MCU translates to "attention" while delay() amounts to "ignorance".

Yes, it is better practice to write code without delays, but in some situations time just doesn't matter.

I use delay() in setup() now and then or in quick and dirty one-off tests, before I write the real code.
It even has reuse value as long as the next thing is also Q&D.

JimboZA:
Hi everyone,

I'm an adult educator, by default at first and then by post-grad study, so I'm always pondering the desirability of including stuff in training and, if so, to what depth and how soon. One of the first programming courses I went on adopted a silo approach and taught each command in depth, although a working program needs width (i.e. knowledge of many commands) before depth (knowledge of each command's esoteric parameters). Since then, it's been interesting for me to see how curriculum requirements are a) defined and then b) sequenced into the material.

What you teach must depend on what your students should already know and what you know well enough to explain in simple terms.

You could make them aware that top-down code is not the only or best approach and run them through what Nick Gammon shows in his blog (I know you've seen the link; if you didn't look and bookmark it, then quit snoozing) as a brief intro to event-driven code.

I could liken top-down code to rows of plants that you walk down, watering one plant after the next.
I could liken event-driven code to animals: you pour the water into a trough and the animals come and drink.
Either analogy breaks down fast, though. A teacher could do better.

Perhaps a course could start with different simple examples that do the same thing and then develop those as new material is introduced, but only so far; then, for the students' final projects, have them incorporate one more piece, perhaps choosing which example to build on and showing why that example and what they did.
But that would take a, is the word syllabus? Whatever, a lot of pre-work planning and testing... and some hair loss.

Maybe the biggest hidden plus to event-driven code (which includes state machines) is the ease of adding and removing pieces, IF the code is written correctly, with tasks and hardware management (pins, timers, etc.) minimally or not at all interleaved.
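A bare-bones illustration of what I mean (the task names, pins, and intervals are all made up): each task keeps its own millis() timer, and loop() just calls them in turn, so adding or removing a piece is one function plus one line in loop().

```cpp
const byte heartbeatPin = 13;      // assumed: built-in LED
unsigned long lastBlink = 0;
unsigned long lastReport = 0;

void blinkTask() {                 // toggles the LED every 500 ms
  if (millis() - lastBlink >= 500) {
    lastBlink += 500;
    digitalWrite(heartbeatPin, !digitalRead(heartbeatPin));
  }
}

void reportTask() {                // prints a status line every 2 seconds
  if (millis() - lastReport >= 2000) {
    lastReport += 2000;
    Serial.println(F("still alive"));
  }
}

void setup() {
  pinMode(heartbeatPin, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  blinkTask();
  reportTask();
  // buttonTask(), sensorTask(), ... drop in or pull out as needed
}
```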