
Topic: Improving millis()


bens: I'd like to find a solution that allows both too.

I'm a bit confused about your function:

 unsigned long temp = timer0_overflow_count;
 if (temp == 33554432UL)  // i.e. 0x100000000 / 128
   temp = 0;
 timer0_overflow_count = temp;

Would millis() be the same as it is now (in Arduino 0011)?  In that case, wouldn't the value returned by millis() go from 34359737 to 0?  Or if we used 33554375, wouldn't millis() go from 34359678 to 0?  Either way, you can't just do millis() - lastEventMillis to measure an interval.

Or am I missing something?

I was also thinking about using timer 1 for the millis(), since, as you point out, it's (relatively) easy to make it accurate.  Unfortunately, there do seem to be a lot of other uses for the timer (like the Servo library).  It seems better to take over timer 0 than timer 1.


May 21, 2008, 02:09 am Last Edit: May 21, 2008, 02:10 am by bens Reason: 1
bens: I'd like to find a solution that allows both too.

I'm a bit confused about your function:

You're right to be confused: I'm being a moron.  I was focusing solely on the fact that the overflow was happening somewhere other than on an even millisecond boundary, thereby causing that overflowed millisecond to have an erroneous duration.  It would seem my function completely neglects the much more significant problem of the overflow not occurring at 0xFFFFFFFF.  I'm beginning to understand now how the workarounds just become messier and messier.  I keep feeling like there should be some supremely elegant way of doing this well, but maybe that's not the case.

- Ben


Hey, what about just giving the user a resetMillis() function that would set timer0_overflow_count to zero?  This way, the user can accomplish differential timing by resetting the overflow count and then waiting for the desired number of milliseconds to elapse.  This lets the user stay safely away from the region where the overflow counter overflows.

Are there any down sides to this that I'm not taking into account?

- Ben


May 21, 2008, 10:05 am Last Edit: May 21, 2008, 12:04 pm by mem Reason: 1
I did a timing measurement on the difference in performance with the new millisecond ISR code, here are the results:

The Arduino 0011 version of the interrupt takes around 4.5us, including saving and restoring registers.  Adding my code to increment the seconds counters adds 1.5us.

David's proposed 012 code as posted here takes 9us. Half that time is in the while loop.

The timings were done using an HP 16500C logic analyzer.

My view is that using 1% of the processing power to service the millisecond interrupt is ok unless someone can come up with a more efficient solution.


mem: Thank you for doing that measurement.  It's much easier to make these kinds of decisions with good data behind them.  

I think it's okay to double the ISR time in order to keep millis() from overflowing until it hits the limits of an unsigned long.  I'd love to have a better solution (and I think there probably is one), but I think this is reasonable.  Feel free to take a shot at it, though.

mem: do we need to add code to the ISR to do the seconds calculations?  Or can you just base your functions / library on millis()?

bens: resetting the millis() counter seems like an extra complication that most people shouldn't need to worry about.  If you really want to do it, you can always set the internal variable directly, but that's more of a hack.  


May 21, 2008, 09:07 pm Last Edit: May 21, 2008, 09:08 pm by bens Reason: 1
Thanks for working out the timing, mem.  I was trying to approximate it in my head by visualizing the assembly, but that's not such an accurate way to go.  Still, it's pretty close to what I was expecting (my main unknown being how many times the while loop would execute).

bens: resetting the millis() counter seems like an extra complication that most people shouldn't need to worry about.

I don't know, it seems kind of natural to me, just like pressing the start button on a stopwatch when you want to begin timing something (as opposed to looking at a clock when you start and doing some quick addition to figure out what time you want to stop).  In some ways it even seems a little cleaner in that it lets you remove the often irrelevant absolute time component so you're only dealing with the relative values you care about.

Nevertheless, I completely understand if you are happy with your planned solution and I don't want to become annoying by beating a dead horse.  Interrupting for 9us every millisecond won't be a significant hardship for most people, and those people with interrupts so critical they can't spare a potential 9us delay in getting to them probably won't be using the Arduino environment anyway (or if they are they can just disable the timer0 overflow interrupt).

As I said earlier in this thread, one of my libraries requires timer0 as a 10 kHz PWM generator for motor control, which means that the 20 MHz mega168 running the library will be experiencing a 7 us interrupt every 100 us, which is a bit more burdensome to the CPU.  I'll do my best to find some sort of workaround to this problem in my library.

Really my main complaint was that the current implementation handled the timer0 overflow counter in an unsafe way, so if that gets fixed I can be totally happy (assuming there are no other unsafe, non-atomic operations out there that need to be fixed in a similar way).

- Ben


do we need to add code to the ISR to do the seconds calculations?  Or can you just base your functions / library on millis()? 

For most applications I can think of it won't matter much, but feel free to include my code if you want.

But because time (millis or seconds) may be requested frequently, it's best to consider a solution such as Ben's proposal that avoids fiddling with interrupts to protect access to variables modified in the ISR.


It may be the accuracy of the 16MHz crystal which causes the one-hour counter below to lose 'a few seconds per hour'.  The spec sheet for the 8MHz processor shows some temperature dependence, for example dropping to 7.9MHz if the ambient temperature rises from 25C to 50C.  In an ordinary location one might expect a couple of degrees of temperature variation, about a 0.1% clock frequency change, or a couple of seconds per hour.

If this is the fault, then you software guys will not fix it in the C code.  A warm finger on the oscillator should demonstrate a measurable change over five minutes if I'm right and it is temperature dependence.  We all want the software to be glitch-free as well, so carry on.

Locking to mains was a fair idea, but beware: although here in Australia the frequency averages 50.00 Hz over the month, it wanders from minute to minute between 49.9 and 50.1 Hz (which is the main signal to open or close a steam valve at the power station).  Worst-case variations (per www.nemmco.com) are 49.7 to 50.3 Hz.
;D ;D


It may be the accuracy of the 16MHz crystal which causes the 1hour counter below to lose 'a few seconds per hour'.

FWIW, I have been getting an accuracy of a few seconds per day on two Freeduinos I have tested over the last few weeks.  Losing a few seconds per hour on a crystal-clocked board at room temperature is most likely caused by software frequently disabling interrupts.


I was surprised to discover that the timing error introduced by disabling interrupts in the proposed 0012 delay function is almost insignificant.  I had expected that many seconds per day would be lost where code called delay() in a tight loop.  I found the actual error to be around half a second in 24 hours, which is of the same order as the error in the typical Arduino crystal oscillator and in my opinion not significant.

I had thought that a guard variable would be required, of the type proposed by Ben in an earlier post.  So I ran a test to verify the performance without the guard.
Wiring.c modified as per the mellis post:

  unsigned long millis()
  {
     unsigned long m;
     uint8_t oldSREG = SREG;
     cli();              // disable interrupts so the 4-byte read is atomic
     m = timer0_millis;
     SREG = oldSREG;     // restore the previous interrupt state
     testCount++;        // added for this test to count total calls to this function
     return m;
  }

A test sketch called the following function in a tight loop to detect when the next second elapsed.
unsigned long now()
{
  while (millis() - prevMillis >= 1000) {
    seconds++;
    prevMillis += 1000;
  }
  return seconds;
}

The test sketch sent the number of elapsed seconds and the millis testCount values to a remote PC over the serial port. (code to sync the second counts at the start of the test and display the ongoing time deviation is not shown )
After a run of over 24 hours, the Arduino and the remote PC were within one second of the same time.
In the test sketch, millis is called around 200k times per second.  My guess is that because interrupts are only disabled for about 250ns inside the millis function, only 1 in 4000 calls to millis actually delays the processing of an interrupt, and I estimate that delay averages around 125ns.  With 200 calls per millisecond, an interrupt-delaying call therefore happens once every 4000/200 = 20ms.  A delay of 125ns every 20ms is a fractional error of 0.00000625, or around half a second per day.

I am still surprised that the error was not greater and would be interested if someone could verify my analysis.


Wow.  Thanks for doing this test and the comprehensive writeup.  Your analysis sounds reasonable to me, although I didn't check it carefully.  In any case, I'm definitely looking forward to having a millis() function that doesn't overflow every 9 hours.  :)


Mem, I think your analysis overestimates the problems caused by disabling interrupts for a brief period of time.  The effect of this disabling is not cumulative.  If an interrupt happens while interrupts are disabled, the interrupt flag for that event still gets set, and the ISR is entered as soon as the global interrupt flag is re-enabled.  You only run into problems if a second interrupt of the same type happens before the global interrupt flag is re-enabled, at which point you have permanently missed an interrupt.

For example, imagine you have a timer overflow interrupt occurring every 100 us, and then an external interrupt occurs that takes 250 us to process.  This will cost you two timer overflow interrupts for every external interrupt, and you will be losing valuable counts if your timer overflow interrupt is maintaining a counter.

If, on the other hand, your external interrupt only takes 90 us to process (and there are no other interrupts or interrupt-disables in your program), your timer overflow interrupt will never* lose any counts.  This is because while the external interrupt might delay your timer overflow interrupt by 90 us, the timer overflow interrupt will always be triggered before the next one occurs, and hence it will always be able to get back into sync.  Your overflow-maintained count might be off by 90 us at any given moment if its update has just been delayed by an external interrupt, but the effect is not cumulative: even though one interrupt was delayed by 90 us, the next one will happen at the scheduled time (10 us after the previous one rather than 100 us later).

Does this make sense?  I feel like I'm probably not explaining this very well.

- Ben

* My statement about never losing counts could be false under one condition: the external interrupt has higher priority than the timer overflow interrupt, and multiple external interrupts occur in a row that effectively produce one interrupt that is longer than 90 us.



...does this make sense?

Yep, my mind was elsewhere. I was so focused on finishing and testing my date and time libraries sitting on top of that code that I lost sight of what is actually happening.


Well, it's still good that you verified that no overflows are being skipped (or if so, it's an incredibly small number of them) :)

- Ben


Jun 07, 2008, 06:29 am Last Edit: Jun 07, 2008, 06:30 am by dcb Reason: 1
Just an observation: I'm not sure of the intended audience for millis(), but I needed better resolution and overflow detection, and wanted to keep on processing without sitting around counting individual clock cycles or sorting out timers and their restrictions and interactions.  So I've been working on something like the following in Arduino (which is probably going to break in 0012).  I know it isn't exactly microseconds, and this is just an example, not glitch-free, but whatever.

at the start of a timed event call:
unsigned long microSeconds() {
  return ((timer0_overflow_count << 8) + TCNT0) * 4;  // 4 us per timer0 tick at 16 MHz, /64 prescale
}

at the end of an event call:
unsigned long elapsedMicroseconds(unsigned long startMicroSeconds) {
  unsigned long msec = microSeconds();
  if (msec >= startMicroSeconds)
    return msec - startMicroSeconds;
  return 4294967295UL - (startMicroSeconds - msec) + 1;  // + 1 corrects an off-by-one at rollover
}

so as long as the event is less than 71.5 minutes, it works.  elapsedMicroseconds can recognize that a rollover occurred and give a reasonable response.
