millis() accuracy

Looking at the millis() implementation in wiring.c, I was wondering about the following edge case: suppose timer0_clock_cycles contains 15900 and TIMER0_OVF_vect fires once on a 16 MHz Arduino. So timer0_clock_cycles jumps from 15900 to 32284.

According to the logic in TIMER0_OVF_vect, the timer0_millis value will be incremented twice, leaving timer0_clock_cycles at 284.

Now suppose delay(2) is called in loop() right after a TIMER0_OVF_vect interrupt occurred. Doesn't that mean that delay will return one millisecond later (just over 1024 microseconds, actually) instead of at least 2, as was probably expected?

I think the problem comes from running the timer interrupt slightly slower than 1 kHz, so every 40 ms or so a timer "tick" will appear to occur twice in rapid succession, as the logic in TIMER0_OVF_vect adjusts (correctly) for the rate difference.

Wouldn't it be better to let the timer overflow at 250? Or if that's not possible because of PWM, perhaps run it at twice the current rate? That way the jitter would drop from 100% to 50% max. Then again, that's still a lot of jitter when you need good millisecond timing...


Just my opinion, but if you need better timing than the default implementation, you're free to disconnect and reimplement the whole TIMER0 affair.

However, most people want a reasonable timer that is very easy to query and takes a minimum of system overhead. Doubling the timer frequency means doubling the overhead of counting milliseconds.

If you can tweak the existing implementation so it does better without adjusting the timer/interrupt frequency, that would be good to hear.

As halley said, if you need better timing, don't use millis().

Letting the timer tick faster would increase the timing overhead (it takes 9 us to execute the interrupt routine, though this could be greatly improved), but it would also cause problems with delayMicroseconds(), for example. That routine turns off interrupts, so if one waits longer than the timer's interval, millis() will return incorrect readings afterwards.

I have a (maybe) related problem that's driving me nuts. It seems that each call to delay() adds almost exactly 1 ms of overhead. For example, this

void loop() {
  unsigned long begint = millis();
  for (int i = 0; i < 100; i++) {
    delay(5);
  }
  unsigned long endt = millis();
  Serial.println(endt - begint);
}

consistently returns results like 608. The 8 ms seems like reasonable overhead for 100 loops but the extra 1 ms/delay() seems unreasonable.

@bill2009 - No, that's probably "by design". As of Arduino 0013 (I think), delay(N) always takes at least N milliseconds. What happens is that delay first waits for a millisecond transition, and then counts N more transitions. Since you just passed a transition when exiting the loop, the next delay will skip the remaining part of that millisecond before counting to 5. Hence 6 per iteration.

I find the "8" more surprising, actually. A 0..100 loop takes only a fraction of one millisecond.

Anyway, it would be nice if we could reduce this sort of indeterminism.

Does your test always return 608 - never 607 or 609?

As I said above, if you need higher accuracy, don't use millis(). The way delay(N) is written, it waits between N-1 and N+1 milliseconds, as jcw noted. That's usually ok for some quick-and-dirty timing.

@mekon83 - Yes, good point. Still, we can try to improve what we have (which starts by understanding it better).

One way to avoid the 5 vs. 6 ms issue would be for delay(N) to note the current TIMER0 count, and wait for it to reappear N times. But that ignores the 1024 usec vs 1000 usec aspect (timer0 overflows at 256, not 250). So here's another thought for delay(N), untested:

unsigned long target = micros() + 1000 * N;
while ((long) (micros() - target) < 0)
      ;

IOW, drop the notion that delay() works with the jittery millisecond value, and use the new microsecond value instead.

The millisecond counter is still very useful for longer timescales, of course. As long as no-one disables interrupts too long, it’s still a good way to track time. As for delayMicroseconds() - we could have it re-enable interrupts periodically when called with an argument larger than 900 or so.

@jcw: This is a really good idea. delay() could be rewritten this way and that would give us a much more reliable delay with an accuracy of a couple tens of microseconds.

Ok, I’ve made the following two changes to hardware/cores/arduino/wiring.c:

void delay(unsigned long ms)
{
      if (ms < 50) {
            delayMicroseconds(1000 * ms);
            return;
      }
      unsigned long start = millis();
      while (millis() - start <= ms)
            ;
}

And inserted the following at the start of delayMicroseconds():

void delayMicroseconds(unsigned int us)
{
      if (us > 500) {
            unsigned long target = micros() + us;
            while ((long) (target - micros()) > 0)
                  ;
            return;
      }
      // ... original short-delay busy-wait code continues here ...
}

(sorry for the re-edits of this post)

Ok, I’ve stopped the tweaking. The above changes only affect microsecond delays > 500 us and millisecond delays < 50 ms. In that range, a loop is used which should be accurate in the couple-of-microsecond scale and which does not lock out interrupts.

Thank you very much for the explanation and the code. What I'm doing is implementing a pedalling speed booster that tracks a bicycle pedal rotation and issues pulses 50X as fast (sort of like putting a card in the spokes). It's not that great accuracy is needed, but the 1 ms was a real problem. I'm going to take the liberty of using the code to implement an "fdelay(float)" which will simplify my code a lot - the perils of open source.

I am assuming that if I use the 1st version of it without the delayMicroseconds then I don’t need to modify delayMicroseconds - is that fair?

void fdelay(float fms)
{
      unsigned long ms = fms;
      if (ms < 1000) {
            unsigned long target = micros() + 1000 * fms;
            while (target > micros())
                  ;
            return;
      }
      unsigned long start = millis();
      while (millis() - start <= ms)
            ;
}

Looks good to me. If you want rounding, you could use "unsigned long ms=fms+0.5;".

ah, good, thanks.

@bill2009: I'd say it is a really bad idea to do it this way, i.e. using float, since the float => int conversion and multiplication that you have there already take 200 us, and if you include the addition, another 60 us, that totals 260 us and your accuracy is gone =) Also, the code is 2800 bytes larger just because of those two operations. Maybe the compiler can perform some optimizations if you use constants...

I really should test my code before posting here...

Had to make another adjustment to delayMicroseconds() to get it to work (now corrected in my earlier post). Sorry about the chatter.

Thanks again. As I was looking at how to get rid of the floats I realized I was dividing a long by 50 then multiplying it by 1000 (doh!).

I ended up with the following in my loop:

  if (pedalperiodMS != 0) {
    outperiodUS = pedalperiodMS * 20;       // this is (1000/boostfactor);
    endperiodUS = micros() + outperiodUS;   // end of full output period
    onperiodUS = micros() + 2000;           // end ON cycle in 2 ms
    while (onperiodUS > micros()) ;         // loop til end ON period
    while (endperiodUS > micros()) ;        // loop til end OFF period
  }

which seems rock-solid. And at over 8000 rpm I now hold the record for simulated pedalling speed.

I do have a couple of remaining questions if someone has the patience:

  • I originally left a delay(2) in place of the micros() loop for the 2 ms fixed ON period, but I found I was missing serial inputs. That can't be right.
  • I noticed in your code jcw that you use "while ((long) (target - micros()) > 0)" instead of just the obvious "target > micros()". Why is that?

I found I was missing serial inputs

All I can think of is that delayMicroseconds() disables interrupts - for baud rates of, say, 57600 and higher, maybe this causes problems?

while ((long) (target - micros()) > 0)

That's a sneaky way to avoid overflow problems. If micros() is nearly at the limit of an unsigned long (i.e. very large), then target might end up being an overflow value (i.e. very small). The way the test is written, it will still work. Tricky stuff - I'm not 100% convinced I got it right.

Overflow is a very real issue with microseconds: it happens 4294 seconds (about 71 minutes) after startup.

@bill2009: Baud rate 19200 => maximum 1920 bytes/sec => Arduino has 520 us to process a byte. If you disable interrupts for more than that (delayMicroseconds does that) you will start missing bytes. It's much worse for higher baud rates.

I'm not using delayMicroseconds(), I was using the original delay() and my serial was 9600 baud. Now I just call micros() and millis() a lot and it seems ok. It's probably just some simple thing I've changed and not realized it.

Thanks for the explanation about the overflow issue with micros(). I've fixed my code but it never runs for more than a few minutes at a time so it probably won't matter (famous last words).