delay with millis() question

In an example using millis() to set a delay between samples, I saw the following:

delay((LOG_INTERVAL - 1) - (millis() % LOG_INTERVAL));

I understand setting the delay to LOG_INTERVAL, but I'm not sure what this is doing. Why subtract 1, and why subtract millis() modulo LOG_INTERVAL? Anyone know?

Thanks in advance, Dave

BTW I hate using delay(), preferring to trigger functions based on elapsed time instead. But sometimes delay() can be useful.

Why subtract 1, and why subtract millis() modulo LOG_INTERVAL?

More pertinently, why use delay() ?

As noted in the post, I am not a fan of delay() either, but a) it can sometimes be useful, even if only as a temporary solution, and b) this is more of an academic exercise; I hate not being able to figure out what it does.

Why subtract 1, and why subtract millis() modulo LOG_INTERVAL? Anyone know?

No idea whatsoever. Where did that code come from? Perhaps it makes more sense in-context.

It looks vaguely like it is trying to make the delays expire at specific multiples of LOG_INTERVAL, so that you get an entry at (say) 1000, 2000, 3000, etc., regardless of how much time the code in between the delays takes. But that ought to be easier to achieve WITHOUT using delay()...
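For what it's worth, a minimal sketch of that idea (I'm assuming LOG_INTERVAL is 1000 for illustration; the Serial prints and the random() call are stand-ins I added for variable work, not part of the original example):

  #define LOG_INTERVAL 1000

  void setup()
  {
    Serial.begin(9600);
  }

  void loop()
  {
    // sleep until just before the next multiple of LOG_INTERVAL
    delay((LOG_INTERVAL - 1) - (millis() % LOG_INTERVAL));

    Serial.println(millis());   // prints ~999, ~1999, ~2999, ...

    delay(random(50, 400));     // stand-in for work that takes a variable time
  }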

Thanks for the reply; I am starting to not feel so dumb.

It came from an Adafruit data logger example. From what I can tell from the code and text, they are simply trying to limit the measurements to once a second. It seems to me a strange way to do it, but I figured it must be something I don't understand or am missing.

Here is the relevant text from the article: “Now we're onto the loop, the loop basically does the following over and over: Wait until it's time for the next reading (say once a second - depends on what we defined) Ask for the current time and date from the RTC…”

FYI, LOG_INTERVAL is set to 1000.

Here are the first few lines of the main loop:

  void loop(void)
  {
    DateTime now;

    // delay for the amount of time we want between readings
    delay((LOG_INTERVAL - 1) - (millis() % LOG_INTERVAL));

    digitalWrite(greenLEDpin, HIGH);

If the value of LOG_INTERVAL is 1000, then subtracting 1 from it seems more like academic perfection than anything of practical value. Maybe it is intended so that the wait ends just before the interval boundary, rather than overshooting by a full interval when millis() % LOG_INTERVAL happens to be 0.

Presumably millis() % LOG_INTERVAL is intended to produce the millisecond offset within each one-second interval.

Are these LOGs floating down a Canadian river to a pulp mill?

...R

It's the wrong way to do regular sampling, basically.

The right way is something like:

  static unsigned long last_time = 0;

  if (millis() - last_time >= LOG_INTERVAL)
  {
    last_time += LOG_INTERVAL;
    // ... do stuff ...
  }
  // ... other stuff can happen here ...

Thanks, and I totally agree.
I normally use something along the lines of the below, but since it really does the same thing, it is more out of habit, and because I find this method a bit more readable (for me, anyway).

  if (millis() >= eventTime)
  {
    eventTime = millis() + delayTime;
    // Do whatever it is that needs doing
  }
…rinse, lather, repeat

Sometimes I will throw in some elaborate and entirely unnecessary conversions and/or calculations (e.g. converting millis() to its time/date components), just because a) I am kind of a geek, b) I can't leave well enough alone, and mostly c) sometimes I do dumb things because I did not think them through first (ask my wife). It makes me wonder if the original code in question is maybe the result of someone else like me (the poor fool). :slight_smile:

You know this expression can go wrong?

If the loop is not that fast (i.e. it can miss a ms) and millis() + delayTime comes out at 4,294,967,295 (or close to that), the test millis() >= eventTime can be missed.

With if (currentMillis - previousMillis > interval) you don't have that minor problem.
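A minimal sketch of that pattern (the variable names are just placeholders, not from anyone's actual code):

  unsigned long previousMillis = 0;
  const unsigned long interval = 1000;   // for example, 1 second

  void loop()
  {
    unsigned long currentMillis = millis();

    // unsigned subtraction wraps around correctly, so this test keeps
    // working when millis() rolls over past 4,294,967,295 back to 0
    if (currentMillis - previousMillis > interval)
    {
      previousMillis = currentMillis;
      // do whatever it is that needs doing
    }
  }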

It's a crap way to flash the green LED once per second?

However, as mentioned, if the workload takes more than 1 second, it won't flash again for a further second, so the time between flashes will be 2 seconds.
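To put numbers on it (taking LOG_INTERVAL as 1000, with a flash at ~1000 ms): if the work then runs until 2200 ms, the next delay is (999 - 2200 % 1000) = 799 ms, so the next flash lands at ~2999 ms, two seconds after the first.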

millis() % LOG_INTERVAL gives us the remainder in milliseconds past the last full second, so if millis() returned 1544, this part would be equal to 544.

The next part subtracts this remainder from (almost) a full second, i.e. 999 ms, since the % operator will only ever return 999 at most.
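Carrying that example through: delay((1000 - 1) - 544) is delay(455), which ends at millis() == 1999, one millisecond short of the 2000 ms boundary, so the next reading lands right on the next interval.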