ATtiny85 Timer/Counters and Delay Functions

Rather than being specifically about code, this is a library question. If this is the wrong forum, please let me know.

Can anyone tell me how "delay()" and "delayMicroseconds()" are implemented for the ATtiny85 when running under an Arduino bootloader? I need to know if these functions are leveraging Timer/Counter0 or Timer/Counter1.

Thank You

Not personally, but @DrAzzy might.

More or less the same way they're implemented on an ATmega328p; Timer0 is set to interrupt approximately every millisecond (every 1024 us at 16 MHz), and a tick and millisecond counter are incremented appropriately.
You can see all the gory details in the source code. (If you're using Dr Azzy's ATtinyCore (and you probably should be), things may get a bit obscure because of the way a single file is set up to support MANY different CPUs and clock rates.)
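
A minimal sketch of that scheme, using the stock wiring.c names (ATtinyCore's real version differs per part and clock rate):

```cpp
// Sketch of the classic scheme: Timer0 overflows every 64 * 256 clock
// cycles (1024 us at 16 MHz) and its ISR bumps the counters that
// millis() and micros() are computed from.
#include <Arduino.h>  // ISR() macro, via <avr/interrupt.h>

volatile unsigned long timer0_overflow_count = 0;  // feeds micros()
volatile unsigned long timer0_millis = 0;          // feeds millis()

ISR(TIMER0_OVF_vect) {
  timer0_overflow_count++;  // micros() derives from this plus the live
                            // timer register
  timer0_millis += 1;       // ~1 ms per overflow at 16 MHz; the leftover
                            // 24 us is carried by a fractional counter
                            // in the real code (see further down)
}
```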

(Oh: the bootloader has nothing to do with millis(). By the time your sketch runs, anything the bootloader has done should have been completely reset (perhaps depending on which bootloader).)

Assuming you're using my core, they're implemented the same way they usually are. delay() is two nested loops: the outer one records micros() and then enters the inner loop, which checks micros() until at least 1000 us have elapsed since the recorded value, then subtracts one from the remaining milliseconds of the delay and adds 1000 to the initial micros value (that's what it should be doing, at least; I'll be sure to verify that's what it's actually doing for 2.0.0). Thus, if interrupts fire during a delay, they do not change the time the delay takes to run, as long as they aren't so long that they break millis. So if you have a delay(1000) and during that time an ISR fires 1000 times, taking 50 us each time, the delay will still end about 1 second after you called it. If you violate the conditions documented for proper timekeeping and instead have a terribly written ISR that fires only ten times in that 1000 ms, but each of those takes an insane 5 ms to return, millis will lose about 40 ms (a 5 ms ISR spans roughly five millis ticks, but only one pending interrupt gets remembered, so each occurrence drops about 4 ms), and the delay will be more like 1040 ms.
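
For reference, a sketch of that nested-loop structure, modeled on the stock Arduino implementation (the core's actual source may differ in detail):

```cpp
// Because this watches micros(), which is driven by the hardware timer,
// time spent inside ISRs still counts toward the delay.
void delay(unsigned long ms) {
  uint32_t start = micros();
  while (ms > 0) {
    // Burn off whole milliseconds as they elapse.
    while (ms > 0 && (micros() - start) >= 1000) {
      ms--;           // one millisecond of the delay is done
      start += 1000;  // advance the reference by exactly 1000 us so
                      // rounding error doesn't accumulate
    }
  }
}
```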

delayMicroseconds() is very, very different: it is a pure cycle-counting loop written in assembly. That means the time does not tick down while an ISR is running, unless you disable interrupts first. If delayMicroseconds(1000) gets interrupted by an ISR that takes 750 us to run (again, a poorly written ISR; an extreme example for the sake of making the effect more visible) 100 us into the delay, the counting is simply paused: after the ISR returns, the remaining 900 us of the delay still runs, so the whole thing takes about 1750 us.
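
A stripped-down sketch of the cycle-counting idea, assuming a 16 MHz clock (the real routine scales the count for the configured clock speed and compensates for the loop setup and call overhead):

```cpp
#include <stdint.h>

// Busy-wait by counting cycles. At 16 MHz, each pass through the
// two-instruction loop takes 4 cycles (0.25 us), so 4 iterations per
// microsecond. Overflow for very long delays is ignored here.
void delayMicroseconds_sketch(uint16_t us) {
  uint16_t count = us * 4;  // iterations needed (16 MHz assumed)
  __asm__ __volatile__(
    "1: sbiw %0, 1" "\n\t"  // 2 cycles: decrement the counter
    "   brne 1b"            // 2 cycles while the branch is taken
    : "=w" (count)
    : "0" (count)           // input starts in the same register pair
  );
}
```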

That's the current implementation I am using on mTC and DxC, and I think it's the same scheme on ATTC. It's derived from the original version, with a few fixes for very short delays in delayMicroseconds(), plus added assembly blocks to handle the weird speeds we support.

The first issue happens when delayMicroseconds() is called only a very small number of times - LTO will inline the function. Great - except that the time it takes for the call and return instructions is included in the delay calculation, so when it gets inlined, one call (3 clocks on AVRxt, 4 on AVRe) and one ret (4 clocks) are lost - potentially a whole MICROSECOND. That doesn't really matter when you delay for 100 us, but when you need a 5 us delay, it means you get 4 us or 5 us depending on decisions the linker makes at the end of the compile process.

The LTO fix is definitely not in a released version of ATTC. The fix is turning the classical variable delay into a non-user-visible _delayMicroseconds() with the noinline attribute, and then having delayMicroseconds() itself be a stub with always_inline that uses __builtin_constant_p(delay): if the delay is constant, it passes it to _delay_us() from <util/delay.h>, which is perfect but requires the delay to be a compile-time constant; otherwise it passes it to _delayMicroseconds(). The stub function gets optimized out by LTO, which, while optional in 1.5.x, is mandatory in 2.0.0 because we can guarantee that the compiler will support it (it will pull in the Azduino6 or Azduino7 version of the toolchain), and a huge number of the improvements in 2.0.0 depend on LTO. LTO doesn't just result in a 5-25% reduction in code size; it also lets you do non-preprocessor if statements that can test whether an argument is constant, even when the constant comes from another compilation unit. That's what makes it possible to give compile errors when non-existent pins are used, and the like.
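
Roughly, the stub pattern looks like this (a sketch, not the core's literal source; the body of _delayMicroseconds here is a placeholder for the real compensated loop):

```cpp
#include <stdint.h>
#include <util/delay.h>  // _delay_us(): exact, but needs a compile-time constant

// The real cycle-counting loop lives here. noinline keeps the call/ret
// overhead present on every path, so it can be compensated for exactly.
__attribute__((noinline)) void _delayMicroseconds(uint16_t us) {
  while (us--) _delay_us(1);  // placeholder for the real loop
}

// User-facing stub: always inlined, so it vanishes entirely, leaving
// either an exact _delay_us() or a call to _delayMicroseconds().
__attribute__((always_inline)) static inline void delayMicroseconds(uint16_t us) {
  if (__builtin_constant_p(us)) {
    _delay_us(us);           // constant argument: perfect busy-wait
  } else {
    _delayMicroseconds(us);  // runtime argument: compensated loop
  }
}
```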

A while ago someone put a huge amount of work into making sure that bizarro clock speeds had essentially zero calculation drift, because he was using 18.whatever MHz USART clock crystals and needed them to keep time well (previously, the time was not as precise due to integer math).

The CORRECT_EXACT_MILLIS stuff makes it harder to follow what's going on in millis on ATTC, but it is known to work and makes millis accurate at whackjob speeds.
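
The trick it builds on is the fractional-remainder carry from the stock wiring.c; a simplified version for reference (CORRECT_EXACT_MILLIS pushes the same idea much further):

```cpp
#include <Arduino.h>  // clockCyclesToMicroseconds(), ISR()

// When one Timer0 overflow isn't an exact number of milliseconds, the
// sub-millisecond remainder accumulates in a fractional counter and is
// carried into millis once it adds up to a whole millisecond.
#define MICROS_PER_OVERFLOW (clockCyclesToMicroseconds(64UL * 256UL))
#define MILLIS_INC (MICROS_PER_OVERFLOW / 1000)
#define FRACT_INC  ((MICROS_PER_OVERFLOW % 1000) >> 3)  // in 1/125 ms units
#define FRACT_MAX  (1000 >> 3)

volatile unsigned long timer0_millis = 0;
static unsigned char timer0_fract = 0;

ISR(TIMER0_OVF_vect) {
  unsigned long m = timer0_millis;
  unsigned char f = timer0_fract;
  m += MILLIS_INC;
  f += FRACT_INC;
  if (f >= FRACT_MAX) {  // remainder reached a whole millisecond: carry
    f -= FRACT_MAX;
    m += 1;
  }
  timer0_millis = m;
  timer0_fract = f;
}
```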

Thanks for the detailed answer. The part I was looking for was the use of timer/counters. It looks like at least one timer is used to implement delay(). That explains some behavior I have seen.

Oh, yeah - all ATTC parts use Timer0, always. The reason is that it's the same on almost every part, so we can share a huge portion of the code. Timer1 is all over the place on classic tinyAVRs: it varies from a standard Timer1 to a copy of Timer0 to one of several wacky async things.

On mTC, there's a menu to select between a TCA, TCB or TCD (and which one, when there are several), and even to sacrifice micros and use the RTC (which lets you keep time while sleeping).

On DxC, all TCAs and TCBs are supported timing sources, but that's it. The TCD is too powerful a peripheral to waste on that, has too many variables that go into its speed, and is generally a shitty millis timer.
I'm overall not happy with how RTC millis came out on mTC, which is why it's not an option on DxCore.

DxCore's only way of doing millis that keeps time during sleep, and of doing "sleep for XX seconds", will be manual until I get SmeepLib released. That will let you use a TCA or TCB for millis, but provide methods that enter sleep with periodic awakening, during which we increment a wake count and leave in place a flag that says "return to sleep". Meanwhile, in other ISRs (i.e., the one you use to wake from sleep), you clear that flag. Either way, when the ISR exits, execution resumes from the point where the chip went to sleep, which then checks the flag and either returns to sleep, or corrects millis for the time spent sleeping and resumes user code. User-defined ISRs used to wake may choose not to clear the flag if they decide the interrupt was spurious. And there are a shitload of corner and edge cases. For better or worse, sleep is simpler on classic AVRs. It's much less well featured, but that makes it much simpler.
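
To make that concrete, a rough sketch of the flag-and-wake-count idea; this is not SmeepLib's API, just an illustration assuming a modern AVR with the RTC PIT (configured elsewhere) as the periodic wake source, with sleepFor() and MS_PER_WAKE as hypothetical names:

```cpp
#include <avr/sleep.h>
#include <avr/interrupt.h>

#define MS_PER_WAKE 1000UL             // hypothetical: PIT period in ms

volatile bool stay_asleep = true;      // the "return to sleep" flag
volatile uint32_t wake_count = 0;      // periodic wakeups while sleeping

ISR(RTC_PIT_vect) {                    // periodic wake source
  RTC.PITINTFLAGS = RTC_PI_bm;         // clear the PIT interrupt flag
  wake_count++;                        // record time spent asleep
}

ISR(PORTA_PORT_vect) {                 // example user wake ISR
  PORTA.INTFLAGS = 0xFF;               // clear the pin interrupt flags
  stay_asleep = false;                 // real event: stop sleeping
  // (on a spurious interrupt, leave stay_asleep set instead)
}

void sleepFor(uint32_t wakes) {        // hypothetical helper
  stay_asleep = true;
  wake_count = 0;
  set_sleep_mode(SLEEP_MODE_PWR_DOWN);
  sleep_enable();
  // Every ISR return lands right back here: check the flag and count,
  // then either go back to sleep or fall through to user code.
  while (stay_asleep && wake_count < wakes) {
    sleep_cpu();
  }
  sleep_disable();
  // Correct timekeeping for the time spent asleep; timer_millis stands
  // in for the core's internal millis counter:
  // timer_millis += wake_count * MS_PER_WAKE;
}
```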
