Broken millis(), broken PWM on pins 5 and 6

Let's face it, millis() is pretty badly broken. It rolls over after 9h 32m 39.738s (0x20C49BA milliseconds), which means that at that point, any delay or anything else that depends on the value of millis() is likely to break. Also, 2.4% of the time, the value of millis() increments by two instead of by one - if your application depends on knowing the difference, it will probably break.

If it isn't possible/desirable to make timer0 tick at 1KHz (or some neat multiple of same), then at least make it tick faster than 1KHz (maybe >= 4KHz) instead of slower, so that the value of millis() never increments by more than one! Then -

#define CLOCK_CYCLES_PER_MILLISECOND (F_CPU / 1000UL)
// timer 0 prescale factor is 64 and the timer overflows at 256
#define NUMBER_OF_CLOCK_CYCLES_PER_TIMER0_OVERFLOW (64UL * 256UL)

volatile unsigned long millis_count; // can't be named "millis" - that would collide with the function below
unsigned long clock_ticks;

SIGNAL(SIG_OVERFLOW0)
{
      clock_ticks += NUMBER_OF_CLOCK_CYCLES_PER_TIMER0_OVERFLOW;
      while (clock_ticks >= CLOCK_CYCLES_PER_MILLISECOND) {
            // Preferably this loop should only ever execute either 0 or 1 times per overflow
            millis_count++;
            clock_ticks -= CLOCK_CYCLES_PER_MILLISECOND;
      }
}

unsigned long millis()
{
      unsigned long return_val;
      cli(); // disable interrupts so that the value cannot update while we are reading it
      return_val = millis_count;
      sei(); // re-enable interrupts
      return return_val;
}

(Disabling interrupts here is in fact necessary: an 8-bit AVR reads a 4-byte value one byte at a time, so an interrupt striking mid-read could leave you with a mix of old and new bytes.)

I think the above code should give a millis() that rolls over after 2^32 milliseconds (i.e. 49d 17h 02m 47.296s). Better still, subtracting two different return values of millis() will ALWAYS give the right answer (unless the time difference between the two values is actually more than 2^31 milliseconds - i.e. 24 days and change).

Also, on an ATmega168 (and maybe others?), PWM on pins 5 and 6 is broken, in the sense that if you set a PWM value of 0, you don't get a constant 0 volts out - you get a high frequency signal with a very narrow spike in it. (This is because pins 5 and 6 use timer 0 in fast PWM mode, whereas the other PWM pins use their respective timers in phase-correct PWM mode). May I suggest that if a PWM value of 0 is requested for pin 5 or 6, the pin simply be set low with digitalWrite(pin, LOW)?
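A sketch of that suggestion (analogWriteFixed and the two stubbed calls are hypothetical names, used only to make the dispatch observable off-target; a real sketch would call the Arduino core's digitalWrite() and the normal PWM path):

```c
#define LOW 0

// Stubs standing in for the Arduino core, so the dispatch can be observed;
// these just count how each request was routed.
static int forced_low_calls = 0;
static int pwm_calls = 0;
static void digitalWrite(int pin, int value) { (void)pin; (void)value; forced_low_calls++; }
static void hardwarePWM(int pin, int value)  { (void)pin; (void)value; pwm_calls++; }

// Hypothetical wrapper: pins 5 and 6 use timer 0 in fast PWM mode, which
// still emits a narrow spike at a PWM value of 0, so force those pins
// solidly low instead of programming the timer.
void analogWriteFixed(int pin, int value)
{
    if (value == 0 && (pin == 5 || pin == 6))
        digitalWrite(pin, LOW);
    else
        hardwarePWM(pin, value);
}
```

In a real sketch you'd only need the value == 0 check in front of the existing analogWrite() logic.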

Better still, subtracting two different return values of millis() will ALWAYS give the right answer (unless the time difference between the two values is actually more than 2^31 milliseconds - i.e. 24 days and change).

What's the difference between a 9hr rollover and a 24d rollover? Your solution is just as badly broken as the 9hr solution in that respect - nowhere near "ALWAYS" right.

Either way, the clock isn't infinite, so you'd better code for a rollover if it matters to your application.

If you're really hardcore, you can always discipline the millis() counter with an RTC or GPS PPS signal on an interrupt pin, or even set up another seconds counter. Make that one an unsigned long and you're good for 136 years - 68 if you compare differences with signed arithmetic. (Note to self: unix time() ends in 2038, find a new job before then).

-j

What's the difference between a 9hr rollover and a 24d rollover?

About 24 days. And if you are not using signed arithmetic, the difference goes out to over 49 days.

Your solution is just as badly broken as the 9hr solution in that respect - nowhere near "ALWAYS" right.

Not true. With the current implementation of millis(), the return value 0x020C49B9 is followed by 0x00000000 - if you subtract those two values from each other in an attempt to find out how many milliseconds have elapsed between them, you get the totally nonsensical answer that minus 0x020C49B9 milliseconds have elapsed.

With the proposed implementation of millis(), the return value of 0xFFFFFFFF is followed by 0x00000000 - if you subtract those two values from each other, you get the correct answer that one millisecond has elapsed (as your subtraction "rolls over" in the same way that the return value of millis() rolls over) - hence, arguably, the proposed millis() NEVER rolls over.
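That behaviour is easy to demonstrate in plain C (uint32_t matching the 32-bit return type of the proposed millis()):

```c
#include <stdint.h>

// Interval between two millis() readings. Because unsigned subtraction
// wraps modulo 2^32 - the same modulus the proposed counter wraps at -
// the result is correct even when "now" has rolled over past "then".
uint32_t elapsed_ms(uint32_t now, uint32_t then)
{
    return now - then;
}
```

elapsed_ms(0x00000000, 0xFFFFFFFF) correctly gives 1, whereas with the current counter - which wraps at 0x020C49BA, not at 2^32 - the same subtraction across its rollover yields garbage.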

Either way, the clock isn't infinite, so you'd better code for a rollover if it matters to your application.

There is no need to code for a rollover if your arithmetic rolls over in the same way that the return value of millis() rolls over - which is the major change I'm proposing.

If you are interested in absolute time then yes, of course you will need to watch for rollovers. But most applications care far less about absolute time than about time intervals. And there is a big difference between having to watch for rollovers every 9 hours versus every 24/49 days (depending on whether you use signed or unsigned arithmetic), and between having to watch for the magic value 0x020C49B9 and not having to watch for any value at all, thanks to the way unsigned arithmetic wraps.

If you're really hardcore, you can always discipline the millis() counter with an RTC or GPS PPS signal on an interrupt pin, or even set up another seconds counter. Make that one an unsigned long and you're good for 68 years. (Note to self: unix time() ends in 2038, find a new job before then).

My application isn't interested in absolute time - it is interested in time intervals.

If an app does need absolute time, there is a method you can use that I posted here that rolls over after about 68 years (2^31 seconds). It's convenient to set the _time variable to unix time (it then counts seconds since 1970 as per unix) but it will then roll over in 2038.

I am working on a library that uses this as a kind of a software real time clock, and will post it for those that are interested in real time when I find some real time to finish it.

I'd definitely like to fix millis(). Thanks for posting the code, chris. It seems like a good possibility. I've also been considering trying to move millis() to timer 1 and attempt to configure it such that the interrupt is called exactly once per millisecond. Any idea if that's possible? Also, do you have an idea of how many instructions your handler code takes (vs. the current one)?

I've also been considering trying to move millis() to timer 1 and attempt to configure it such that the interrupt is called exactly once per millisecond. Any idea if that's possible?

It's possible - however, I can't see how you would be able to continue to use timer 1 to generate useful PWM signals on pins 9 and 10.

Also, do you have an idea of how many instructions your handler code takes (vs. the current one)?

I find AVR Studio is a great way of finding out - just single-step through the code counting the cycles it takes.

Doing so, it seems to take 56 clock cycles when the millis counter isn't incremented, and 113 when it is (once!). That compares with 38 clock cycles for the current ISR. So even when the millis counter is incremented, the new code takes no more than 7.1 microseconds to process the interrupt.

Cool, thanks for doing the calculation. It certainly seems worth sacrificing half a percent (or so) of processing power to provide a longer-lived and easier to handle millis() value. This (or similar) will probably go in Arduino 0012.

It certainly seems worth sacrificing half a percent (or so) of processing power to provide a longer-lived and easier to handle millis() value.

I see where you are coming from...

Changing the clock_ticks variable from unsigned long to unsigned int (which should always work: the accumulator peaks at no more than 15,999 + 16,384 = 32,383 clock ticks, well within the 65,535 max that an unsigned int can hold) reduces the time the ISR takes to 38 clock cycles when millis isn't incremented (i.e. the same as the current ISR), and 77 clock cycles when millis is incremented. So it's an extra 39 clock cycles over and above what's already there every millisecond, or about 0.25%.
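The "should always work" claim can be checked off-target by running the tick-accumulation logic with a 16-bit counter and watching how high it ever gets (constants assume a 16 MHz clock and the stock prescale of 64):

```c
#include <stdint.h>

// Simulate the ISR with a 16-bit accumulator and report the highest value
// it ever reaches; 16,384 cycles are added per overflow (prescale 64 x 256
// counts) and 16,000 cycles make one millisecond at 16 MHz.
unsigned peak_clock_ticks(int n_overflows)
{
    uint16_t clock_ticks = 0;
    unsigned peak = 0;
    for (int n = 0; n < n_overflows; n++) {
        clock_ticks += 16384;
        if (clock_ticks > peak)
            peak = clock_ticks;
        while (clock_ticks >= 16000)
            clock_ticks -= 16000;   // carry one millisecond
    }
    return peak;
}
```

The peak works out to 32,256 (a leftover of at most 15,872 plus 16,384), so an unsigned int - and in fact even a signed one - never wraps.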

Still, I think it's a good idea to have timer0 overflow at least twice every millisecond. If it were to overflow 4 times a millisecond, the total hit on performance would be 191 clock ticks per millisecond, or 1.2%.

What's the advantage of having it overflow more than once per millisecond? Better accuracy?

What's the advantage of having it overflow more than once per millisecond? Better accuracy?

You get less jitter.

If it overflows every 0.95 milliseconds, then once every 20 milliseconds or so it will take two overflows to increment millis() - i.e. the time interval between two increments will be 1.9 milliseconds - almost twice what it should be. If it overflows every 0.15 milliseconds, the worst that can happen is 1.05 milliseconds between increments.

If it overflows every 0.95 milliseconds...

Even worse is if it overflows every 0.55 milliseconds - every 11 milliseconds or so, the interval between increments will be 0.55 milliseconds - about half what it should be. This could break some peripherals that need at least a millisecond delay.
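These jitter figures can be verified with a small simulation (plain C, times in microseconds, nothing Arduino-specific):

```c
// Simulate a millis counter driven by one timer overflow every period_us
// microseconds, recording the shortest and longest gap (in microseconds)
// between successive increments of the counter.
void increment_gaps(int period_us, int n_overflows, int *shortest, int *longest)
{
    int ticks = 0, last = 0;
    *shortest = 1 << 30;
    *longest = 0;
    for (int n = 1; n <= n_overflows; n++) {
        int now = n * period_us;
        ticks += period_us;
        while (ticks >= 1000) {            // one simulated millisecond
            ticks -= 1000;
            int gap = now - last;
            if (gap < *shortest) *shortest = gap;
            if (gap > *longest)  *longest  = gap;
            last = now;
        }
    }
}
```

For a 950 µs overflow period the longest gap comes out at 1,900 µs; for 550 µs the shortest gap is 550 µs; and for 150 µs the longest gap is 1,050 µs.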

Makes sense. Any thoughts on the best balance between minimizing jitter and minimizing CPU usage?

Anyone else have any use cases in which the accuracy or jitter of the millis() function is particularly important?

Any thoughts on the best balance between minimizing jitter and minimizing CPU usage?

Hmmm - looking at the datasheet, it looks like the best we can do is to set a prescale of 8 - this means that timer0 would overflow every 2,048 clock cycles, which gives an average of 7.8 overflows per millisecond. The next higher prescale is 64, which gives less than one overflow per millisecond, meaning that millis() could jump by two in just over one millisecond.

Timer 2 isn't much better - it has a prescale of 32, giving a dangerous 1.95 overflows per millisecond, which would mean that there would be times where the return value of millis() would increment in just over 0.5 milliseconds.

So it looks like timer0 with a prescale of 8, giving 2,048 clock cycles per overflow, is the best we can achieve. Using the code below, the time penalty goes to about 318 clock cycles per millisecond, or just under 2%.

volatile unsigned long millis_count; // again, can't share the name "millis" with the function
unsigned int clock_ticks;

SIGNAL(SIG_OVERFLOW0)
{
      clock_ticks += 2048; // prescale of 8: 8 * 256 clock cycles per overflow
      if (clock_ticks >= 16000) { // 16,000 clock cycles per millisecond at 16 MHz
            millis_count++;
            clock_ticks -= 16000;
      }
}

I'm sure if someone wants to attempt to hand-write the ISR in assembly language, that time penalty can be brought down, possibly drastically. I haven't done any assembly language for the Atmel microcontrollers yet - I might consider this to be a good time and place to start!

Anyone else have any use cases in which the accuracy or jitter of the millis() function is particularly important?

Well, nothing totally critical, but the accuracy of the delay() function depends on millis(). Currently a delay(1) could vary from 0.2 to 1.9 msec. While I don't know of any code that relies on the accuracy of delay(), it would be better to make it work as advertised.

Oh, there were some people trying to build an RTC using arduino. I would guess that they used millis(). More accurate millis() might mean less RTC drift.

Just some ideas...

Makes sense. Any thoughts on the best balance between minimizing jitter and minimizing CPU usage?

Anyone else have any use cases in which the accuracy or jitter of the millis() function is particularly important?

I'm working on a timer project where millis() robustness is important since it requires millisecond resolution. In my particular case I'm certainly happy to cede a bit more CPU for this accuracy.

I've noticed the PWM being broken on the project that I am currently working on. Thanks for the tip! digitalWrite(pin, LOW) it is!

Chris

I updated the timer 0 overflow and millis() with the basic strategy you suggested: updating the millis count inside interrupt handler, so it overflows nicely. I didn't change the timer 0 prescale factor because I didn't want the PWM frequency on pins 5 and 6 to differ from those on the other PWM pins. The new code is below. There's still more to do on this, so suggestions are welcome.

volatile unsigned long timer0_clock_cycles = 0;
volatile unsigned long timer0_millis = 0;

SIGNAL(SIG_OVERFLOW0)
{
      // timer 0 prescale factor is 64 and the timer overflows at 256
      timer0_clock_cycles += 64UL * 256UL;
      while (timer0_clock_cycles >= clockCyclesPerMicrosecond() * 1000UL) { // >= so an exact millisecond boundary is carried immediately
            timer0_clock_cycles -= clockCyclesPerMicrosecond() * 1000UL;
            timer0_millis++;
      }
}

unsigned long millis()
{
      unsigned long m;
      uint8_t oldSREG = SREG;
      
      cli();
      m = timer0_millis;
      SREG = oldSREG;
      
      return m;
}
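For anyone wanting to sanity-check the accumulator off-target, here is a plain-C simulation of the handler above (constants assume a 16 MHz clock and prescale 64, and a >= comparison so a tick total landing exactly on a millisecond boundary is carried immediately):

```c
#include <stdint.h>

// Run the overflow handler's accumulator logic n_overflows times and
// return the resulting millis count. 16,384 cycles per overflow
// (64 * 256), 16,000 cycles per millisecond at 16 MHz.
uint32_t simulate_millis(uint32_t n_overflows)
{
    uint32_t clock_cycles = 0, ms = 0;
    const uint32_t cycles_per_overflow = 64UL * 256UL;
    const uint32_t cycles_per_ms = 16UL * 1000UL;  // clockCyclesPerMicrosecond() * 1000
    for (uint32_t n = 0; n < n_overflows; n++) {
        clock_cycles += cycles_per_overflow;
        while (clock_cycles >= cycles_per_ms) {
            clock_cycles -= cycles_per_ms;
            ms++;
        }
    }
    return ms;
}
```

After 1,000 overflows - exactly 1,024 ms of real time - the count is exactly 1,024, with no drift accumulating.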

I didn't change the timer 0 prescale factor because I didn't want the PWM frequency on pins 5 and 6 to differ from those on the other PWM pins.

I would suggest that a better solution would be to up the frequency on the other pins - otherwise you end up with a millis that can jump by 2ms in just over 1ms. But I don't see what problem you are trying to solve by keeping the frequencies of the PWM pins the same - off the top of my head I cannot think of a project that would need those frequencies to be the same.

volatile unsigned long timer0_clock_cycles = 0;

I think you can get away with an unsigned int here - doing so not only saves two bytes of memory, but it also makes the code smaller and run faster.

I think that some motors (motor drivers?) have trouble with faster frequencies. What's the problem with having millis() sometimes increment by two?

I think that some motors (motor drivers?) have trouble with faster frequencies.

A motor driver that runs at 1KHz? That sounds a bit fast for pretty much any physical process I can think of - except sound generation. But sound generation requires the frequency to be programmable - having just the duty cycle modifiable isn't any good for sound generation.

What's the problem with having millis() sometimes increment by two?

If you try to implement a delay of 2 milliseconds, you may end up pausing for only just over one millisecond. An attempt to delay by 3 milliseconds may be completed in just over 2, and so on. I'm pretty sure there are peripherals which would have a problem with that.
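To put a number on that, here is a toy model of the current millis() - the counter advances to floor(elapsed_us / 1000) at each 1,024 µs timer overflow, and the wait is assumed to start exactly at an overflow - searching for the quickest possible completion of a "wait until millis has advanced by 2" loop:

```c
// Search all starting overflows for the quickest completion of a
// "wait until the millis counter has advanced by target_ms" loop.
// Times in microseconds; one overflow every 1,024 us (16 MHz, prescale 64).
long shortest_wait_us(int target_ms, int n_overflows)
{
    long shortest = 0x7FFFFFFFL;
    for (int s = 0; s < n_overflows; s++) {
        long start_ms = (s * 1024L) / 1000;     // counter value when the wait begins
        for (int e = s + 1; e < n_overflows; e++) {
            long now_ms = (e * 1024L) / 1000;
            if (now_ms - start_ms >= target_ms) {
                long elapsed = (e - s) * 1024L;
                if (elapsed < shortest)
                    shortest = elapsed;
                break;
            }
        }
    }
    return shortest;
}
```

The search finds a wait of just 1,024 µs for target_ms = 2 - at some overflows the counter jumps straight from N to N+2, so a 2 ms wait can complete after a single overflow period.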