delay vs. (millis() - timestamp > x) AND processor usage

Interesting question:

DELAY method - we can call delay(), which I think pauses everything in the code, and then increment counters to keep time without storing longs or calling millis(), which uses more RAM. By not comparing long variables, I thought this would lower power consumption and throttle back the CPU by not doing non-stop long math...

void loop()
{
  delay(20);   // wait 20 milliseconds
  counters++;  // increment timers used to keep longer times like 3 seconds
  buttons();   // read buttons on weapons
  watch();     // interpret data received on infrared sensors that collect data on interrupt port 1
}

millis() METHOD - same thing as above, but this causes a non-stop subtraction and comparison of longs. It uses more RAM, and in my mind, it seems the CPU would run full throttle, since it would do this thousands of times per second instead of 50 times per second as above... I could be wrong though. Please tell me if you know.

void loop()
{
  if (millis() - timestamp > 20) // do this 50 times per second
  {
    timestamp = millis(); // reset timer
    counters++;           // increment timers used to keep longer times like 3 seconds
    buttons();            // read buttons on weapons
  }
  watch(); // interpret data received on infrared sensors that collect data on interrupt port 1
}

See, the delay function does not keep great time, because if you run other code afterwards it may take 100 ms or so, so when you come back around it may have been 120 ms instead of 20 ms... but it saves a lot of RAM (and I'm near capacity even with the common tricks, like storing strings in flash memory and using the smallest variable sizes I can). If I go to the millis() style, I keep better time, but it would seem to use more power/batteries = more heat. I can watch interrupts while in delay, so I can interpret the data immediately afterwards without ill effect. I've played two seasons of laser tag using both methods and I'm trying to figure out which is better.

keep better time - another idea to keep time more accurately:

void loop()
{
  if (millis() - timestamp > 20) // do this 50 times per second
  {
    long temp = millis() - timestamp;
    temp /= 20; // in case we were out of the main loop for much over 20 ms, say 100 ms
    timestamp = millis(); // reset timer
    while (temp > 0) // not sure if I can use a for loop w/ a long
    {
      counters++; // increment timers used to keep longer times like 3 seconds
      temp--;
    }
    buttons(); // read buttons on weapons
  }
}
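
One catch with the version above: dividing by 20 and then resetting timestamp to millis() throws away the leftover milliseconds on every pass. A small variant of the same idea - just a sketch, reusing the same counters/buttons/watch names from the code above - keeps the remainder by stepping timestamp forward 20 ms per tick:

void loop() // catch-up sketch: carries the leftover milliseconds to the next pass
{
  while (millis() - timestamp >= 20) // runs once for every 20 ms slot that is owed
  {
    timestamp += 20; // step forward instead of resetting, so the remainder carries over
    counters++;      // increment timers used to keep longer times like 3 seconds
  }
  buttons(); // read buttons on weapons
  watch();   // interpret data received on infrared sensors
}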

  • I am no newbie. I designed a laser tag system using the Arduino programming language and have fabricated my own boards with audio and the ATmega328P chip.

I thought this would lower power consumption and throttle back the CPU by not doing non-stop long math...

No such thing as "throttle back" unless you ask for it.
The processor is flat-out 100% of the time.

In case you're interested, here is the source of "delay":

void delay(unsigned long ms)
{
	uint16_t start = (uint16_t)micros();

	while (ms > 0) {
		if (((uint16_t)micros() - start) >= 1000) {
			ms--;
			start += 1000;
		}
	}
}
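
For what it's worth, "asking for it" on an ATmega328P means using one of the sleep modes. Here is a minimal sketch with the standard avr/sleep.h calls (not tied to the laser-tag code) that idles the CPU between passes instead of spinning - Timer0 keeps running in idle mode, so millis() still works and its interrupt wakes the chip about once a millisecond:

#include <avr/sleep.h>

void loop()
{
  // ... do the 50 Hz work here when it is due ...

  // Idle the CPU until the next interrupt instead of busy-waiting;
  // timers and peripherals keep running in idle mode.
  set_sleep_mode(SLEEP_MODE_IDLE);
  sleep_mode(); // sleeps here, wakes on any enabled interrupt (e.g. the millis() tick)
}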

What if you get two counters (a divide-by-8 and a divide-by-10) to make a divide-by-80, add an RTC with a 4 kHz square-wave output like the DS1307, and create an interrupt every 20 ms?
Then you can do your 50 Hz operation, go into power-down sleep mode, wake up at the next 20 ms interval, do whatever, go back to sleep...
Saves on RAM since there are fewer calculations, and you get more accurate time. Battery savings? Depends on the logic family you select. The RTC only takes 1.5 mA.
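
A rough sketch of that idea, assuming the divided-down 50 Hz square wave feeds Arduino pin 2 (the DS1307 and the divider hardware are left out). A pin-change interrupt does the waking, since it can bring the ATmega328P out of power-down, unlike an edge-triggered INT0:

#include <avr/interrupt.h>
#include <avr/sleep.h>

volatile bool tick = false;

ISR(PCINT2_vect) // pin-change interrupt covering PORTD, which includes pin 2 (PD2)
{
  if (digitalRead(2) == LOW) // count only the falling edge of the square wave
    tick = true;
}

void setup()
{
  pinMode(2, INPUT_PULLUP);
  PCICR  |= _BV(PCIE2);   // enable pin-change interrupts on PORTD
  PCMSK2 |= _BV(PCINT18); // watch PD2 (Arduino pin 2)
}

void loop()
{
  noInterrupts();
  if (!tick) // only sleep if a tick has not already arrived
  {
    set_sleep_mode(SLEEP_MODE_PWR_DOWN);
    sleep_enable();
    interrupts();    // sei(): the next instruction runs before any pending ISR,
    sleep_cpu();     // so the wake-up cannot slip in between these two lines
    sleep_disable();
  }
  interrupts();

  if (tick)
  {
    tick = false;
    // ... 50 Hz work goes here: counters, buttons(), etc. ...
  }
}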

Well that answers that. I kind of figured that delay had something similar going on in the background that used the CPU.

No reason not to use millis() and longs more often, as long as I have RAM available (I'm using around 980 bytes of 2,000 now, I think - I have a function that shows RAM usage).
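
For reference, one common way to get that number on an ATmega328P is to measure the gap between the top of the heap and the current stack pointer - just a guess at what such a RAM-usage function looks like, not necessarily the one used here:

// Estimate free SRAM as the distance between the heap and the stack.
int freeRam()
{
  extern int __heap_start, *__brkval;
  int v;
  return (int)&v - (__brkval == 0 ? (int)&__heap_start : (int)__brkval);
}

Printing freeRam() over serial at a few points in the loop is usually enough to see how close the sketch is running to the 2 KB limit.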

That would work, but adding more hardware is not an option since I'm beyond the prototype stage.

Time to bring the code back to the "ain't broke, don't fix it" phase.