Millis Accuracy Again

I've been working on my high-precision encoder-based correction for telescope mounts.

The telescope mount axis rotates once every 86164 seconds (the sidereal rate). That works out to 1,296,000 arc-seconds per revolution / 86164 seconds = 15.04 arc-seconds per second.

The actual speed of the axis will not be exact due to mechanical errors (periodic error). However, because such error is periodic, the average velocity over an entire worm cycle will still be 15.04 arc-seconds per second.

Therefore, if you measure the position of the axis over an entire worm cycle and compare it to the calculated position, the instantaneous velocity will go above and below 15.04"/second, but at the end of the cycle the calculated and measured positions will match.

Here's my problem: over a worm cycle, I cannot get the measured rotation period to match 86164 seconds. It varies with every run, e.g. 86300, 85950, 86000... roughly 1500-2500 ppm of error.

Now I thought this was caused by the ceramic resonators in my Uno R3 and Mega 2560 R3, but I swapped in a Maxkit32 (which does have a crystal) and am still seeing the same level of error.

So... why am I seeing such timing inaccuracies? (And yes, the telescope axis is guaranteed to rotate at exactly one revolution per 86164 seconds; it is crystal-controlled.)

Is it possible for millis() to be inaccurate? I am running SPI at 2MHz, doing a lot of serial I/O, and have a couple interrupts hooked up via the Encoder library.

orly_andico:
Is it possible for millis() to be inaccurate? I am running SPI at 2MHz, doing a lot of serial I/O, and have a couple interrupts hooked up via the Encoder library.

I'm not sure about SPI and serial I/O, but every time an interrupt executes an ISR, the millis() counter pauses until the ISR is complete. I wonder if for your application it wouldn't be better to do all your timing from an accurate TCXO RTC source? I was going to suggest the ChronoDot, but it doesn't appear to be able to output a counter that increments faster than 1 Hz. It does have a 32.768 kHz clock output, but you would have to count that yourself. Possibly another chip out there?
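For what it's worth, counting a 32.768 kHz output doesn't have to be done in software. On an ATmega you can clock a hardware timer directly from a pin, so the CPU never sees an interrupt per edge. A minimal sketch of the idea, assuming an Uno (ATmega328P) with the RTC's 32.768 kHz output wired to digital pin 5 (the Timer1 external clock input, T1):

volatile unsigned long overflows = 0;

ISR(TIMER1_OVF_vect) {
  overflows++;                 // each overflow = 65536 external clock edges
}

void setup() {
  Serial.begin(57600);
  pinMode(5, INPUT);           // T1: Timer1 external clock input on the Uno
  TCCR1A = 0;                  // normal counting mode
  TCCR1B = _BV(CS12) | _BV(CS11) | _BV(CS10); // clock Timer1 from T1, rising edge
  TCNT1 = 0;
  TIMSK1 = _BV(TOIE1);         // interrupt on overflow only (once per 2 seconds)
}

void loop() {
  noInterrupts();              // snapshot counter and overflow count together
  unsigned long ticks = (overflows << 16) | TCNT1;
  interrupts();                // (a rare race right at overflow is ignored here)
  Serial.println(ticks / 32768); // 32768 ticks = 1 second of RTC time
  delay(1000);
}

This way the timekeeping is slaved to the RTC's crystal; the AVR clock only affects when you happen to read the count.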

millis() is highly inaccurate. It uses a timer to fire an interrupt roughly once every millisecond. If another interrupt handler is running (Serial, your own interrupts, etc.), then the millis() interrupt is delayed. If that delay is greater than 1 ms, then you will drop milliseconds.

If you require better long-period accuracy (minutes, hours, etc) an RTC is much better.

If you want something to happen at a regular interval with more precise timing, finer-grained than you can get from an RTC, then using a hardware timer to trigger an interrupt at the set period will be better than watching millis().

Using millis() is like saying an inch is two and a half centimetres - fine for a rough guess, but never really right.
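To make the hardware-timer suggestion concrete, here is a minimal sketch (assuming an ATmega328P at 16 MHz; the register names differ on other chips). Timer1 runs in CTC mode and fires a compare-match interrupt every 10 ms. Because the hardware keeps counting regardless of what the ISR is doing, a tick is only lost if interrupts stay blocked for an entire 10 ms period:

volatile unsigned long ticks10ms = 0;

ISR(TIMER1_COMPA_vect) {
  ticks10ms++;                 // one tick per 10 ms, straight from the hardware
}

void setup() {
  Serial.begin(57600);
  noInterrupts();
  TCCR1A = 0;
  TCCR1B = _BV(WGM12) | _BV(CS11) | _BV(CS10); // CTC mode, prescaler 64
  OCR1A = 2499;                // 16 MHz / 64 / 2500 = exactly 100 Hz
  TIMSK1 = _BV(OCIE1A);        // enable compare-match A interrupt
  interrupts();
}

void loop() {
  noInterrupts();
  unsigned long t = ticks10ms; // atomic read of the 4-byte counter
  interrupts();
  Serial.println(t);
  delay(1000);
}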

86164 seconds is almost a full day.
If you use an RTC, you have the accuracy of the RTC's crystal, independent of the Arduino.
If you use an internal hardware timer of the Arduino (for example TIMER1 or TIMER2), you have the accuracy of the Arduino's crystal or ceramic resonator.

Any other way is less accurate.

I would prefer the RTC, since the inaccuracy of the ceramic resonators is rather big.
Although I see one small problem: if you have many libraries and interrupts, getting the time from the RTC will be delayed now and then, so the readings might not be consistent to the millisecond.
Since you already have SPI, you could use an RTC with SPI. That is faster than I2C.
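As a rough sketch of that idea (the DS3234 is one such SPI RTC; take the register address, the BCD layout, and the SPI mode here as assumptions to verify against the datasheet):

#include <SPI.h>

const int RTC_CS = 10;             // chip select; any free pin will do

byte readRtcRegister(byte addr) {
  digitalWrite(RTC_CS, LOW);
  SPI.transfer(addr);              // DS3234 read: register address, MSB clear
  byte v = SPI.transfer(0x00);     // clock out the register contents
  digitalWrite(RTC_CS, HIGH);
  return v;
}

void setup() {
  Serial.begin(57600);
  pinMode(RTC_CS, OUTPUT);
  digitalWrite(RTC_CS, HIGH);
  SPI.begin();
  SPI.setDataMode(SPI_MODE1);      // DS3234 accepts modes 1 and 3
}

void loop() {
  byte s = readRtcRegister(0x00);  // seconds register, BCD-encoded
  Serial.println((s >> 4) * 10 + (s & 0x0F));
  delay(1000);
}

Each read costs only a couple of SPI bytes, so it completes in a few microseconds rather than the hundreds of microseconds an I2C transaction takes.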

orly_andico:
I cannot get the measured rotational speed to match 86164 seconds. It varies with every run, e.g. 86300, 85950, 86000... about 1000ppm difference.

I am running SPI at 2MHz, doing a lot of serial I/O, and have a couple interrupts hooked up via the Encoder library.

It might just be hardware clock inaccuracy, but I'd have expected that to be relatively consistent for a given device under similar conditions. You have quite a lot of interrupt-related activity going on there, and it may be that you're getting interrupt overflow on the timer interrupts.

You could test for that by writing a sketch that does nothing but print the value of millis(), let it run for a long time, and compare the output with the actual real-world elapsed time. For example, if you printed the value of millis() every hour and timestamped the output at a PC, then after a day you would have a pretty good idea how accurate the Arduino clock was. (You can use RealTerm to timestamp serial input and log it to a file.)

If you prove the underlying hardware is acceptably accurate, then you could look for ways to avoid contention between the interrupts. I don't have any specific suggestions other than to minimise your use of interrupts, but perhaps if you post your code somebody will spot a way to improve it. I do suggest you test the hardware accuracy first, though, to avoid wasting everyone's time.
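Something like this would do for the test (a minimal sketch; the one-hour interval is arbitrary):

void setup() {
  Serial.begin(57600);
}

void loop() {
  static unsigned long last = 0;
  if (millis() - last >= 3600000UL) { // one hour; unsigned math is rollover-safe
    last += 3600000UL;
    Serial.println(millis());         // let RealTerm timestamp this line
  }
}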

Hmm.. One poster here has repeatedly asserted that he gets ~ 1 second/day accuracy out of a ceramic resonator Arduino.

So I assume the underlying hardware should be relatively OK. That I am also seeing this with a crystal-controlled Maxkit32 seems to support majenko's point that millis() itself is not very accurate.

I do need long-term (at least 20-30 minutes) accuracy, but I also need to be able to read the clock at short intervals (down to ~20 ms), which means the 1 Hz RTCs don't cut it for me. I've had a look at some of the RTCs out there; the DS1307+ seems reasonably OK (small pin count, etc.), but I would end up hooking an interrupt to the SQW output, which would be running at 4.096 kHz.
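For illustration, something like the sketch below is what I have in mind (assuming the SQW pin is wired to pin 2 / INT0 on an Uno and SQW has been configured for 4.096 kHz; the DS1307 SQW pin is open-drain, hence the pull-up). 4096 ticks per second gives ~244 us resolution, comfortably finer than a 20 ms read interval:

volatile unsigned long sqwTicks = 0;   // 4096 ticks = 1 second of RTC time

void countTick() {
  sqwTicks++;
}

void setup() {
  Serial.begin(57600);
  pinMode(2, INPUT_PULLUP);            // SQW is open-drain
  attachInterrupt(0, countTick, FALLING); // INT0 = pin 2 on an Uno
}

void loop() {
  noInterrupts();
  unsigned long t = sqwTicks;          // atomic read of the 4-byte counter
  interrupts();
  Serial.println(t / 4096);            // whole seconds, from the RTC's crystal
  delay(1000);
}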

How is this any different from millis()? In both cases there's some hardware event with an interrupt hooked up to it to update a counter.

Would I get better timekeeping (subject to the underlying ceramic resonator or crystal) if I used say the Timer1 library? I don't need 1ms resolution (hence I can live with a lower interrupt rate for the timer - which would in turn be less susceptible to getting screwed by other interrupt handlers) but I do need high accuracy.

Hence say if I read every 20ms, I expect the reading to be accurate but I don't need to read every 1ms or similar...

Alternatively.. should I just use micros() ?

It seems that a higher tick frequency would be less vulnerable to getting snagged by other, stray interrupts. And if the timer ISR does get snagged, a few microseconds here and there won't matter.

I've posted 1 sec/day using micros() and crystal oscillator.

Thanks... do you get the same with millis() ?

My concern is that millis() is losing ticks due to my interrupt activities going on. (incidentally does SPI use interrupts?)

I know micros() would as well, but because its resolution is 1000X finer, if I decimate the right-most 10 (ish) bits I should get better long-term accuracy than millis(), even if both are losing ticks.

Does this reasoning make sense?

SPI likely uses interrupts to do something with the received byte.

Change this to millis() and see if the time stays true for you:

unsigned long currentmicros = 0;
unsigned long nextmicros = 0;
unsigned long interval = 10000; // adjusted for my board

byte tens_hours = 0; // set Sunday 12/9/12
byte ones_hours = 0;  // seems to gain 1 second/day
byte tens_minutes = 4;
byte ones_minutes = 6;
byte tens_seconds = 0;
byte ones_seconds = 0;
byte tenths = 0;
byte hundredths = 0;

byte prior_seconds = 0;

void setup()
{
  Serial.begin(57600);
  nextmicros = micros();
}

void loop()
{
  currentmicros = micros(); // read the time

  if ((currentmicros - nextmicros) >= interval) // 10 milliseconds have gone by
  {
    hundredths = hundredths + 1;

    if (hundredths == 10){
      hundredths = 0;
      tenths = tenths +1;
    }

    if (tenths == 10){
      tenths = 0;
      ones_seconds = ones_seconds +1;
    }

    if (ones_seconds == 10){
      ones_seconds = 0;
      tens_seconds = tens_seconds +1;
    }

    if (tens_seconds == 6){
      tens_seconds = 0;
      ones_minutes = ones_minutes +1;
    }

    if (ones_minutes == 10){
      ones_minutes = 0;
      tens_minutes = tens_minutes +1;
    }

    if (tens_minutes == 6){
      tens_minutes = 0;
      ones_hours = ones_hours +1;
    }

    if (ones_hours == 10){
      ones_hours = 0;
      tens_hours = tens_hours +1;
    }
    if ((tens_hours == 2) && (ones_hours == 4)){
      ones_hours = 0;
      tens_hours = 0;
      delay(1000);
    }

    nextmicros = nextmicros + interval; // update for the next comparison

  }  // end time interval check

  // counters are all updated now, send to display

  if (prior_seconds != ones_seconds){

    Serial.print (tens_hours, DEC);
    Serial.print (" ");
    Serial.print (ones_hours, DEC);
    Serial.print (" : ");
    Serial.print (tens_minutes, DEC);
    Serial.print (" ");
    Serial.print (ones_minutes, DEC);
    Serial.print (" : ");
    Serial.print (tens_seconds, DEC);
    Serial.print (" ");
    Serial.println (ones_seconds, DEC);

    prior_seconds = ones_seconds;   // show time update once/second
  }  // end one second passing check
  
  // do other stuff in the meantime ...

} // end void loop

That sample doesn't seem to handle the micros() rollover... or am I just not seeing it?

In any case... I don't doubt that the code you've shown is accurate, but it is only doing serial output once per second. Anyway, I will try refactoring my code to use micros() and see where that goes...

There seems to be something terribly wrong with my timekeeping.

I converted the code to use micros() - everywhere in my code where I used millis() I now use micros() / 1000.

I used RealTerm to capture the serial output and timestamp it.

Over a 1408-second period (23 minutes) I have a time deviation of +/- 60 seconds.

This is with the Maxkit32 which has a crystal.

That is a huge time variation. And the Maxkit32 uses the PIC32 core timer, which is supposed to be more accurate than the ISR-based timer in the ATmega2560.

I'm sorry, I can't see your code.

How to use this forum

... and the drift follows a perfect sawtooth pattern.

Debug output from the Arduino and RealTerm is here - https://dl.dropboxusercontent.com/u/63497702/EncoderData/rawdata-20130417191500.xls

The code is quite long, but suffice it to say that the first column in the linked Excel is the Unix time (from RealTerm) and the third column is micros() / 1000 (elapsed milliseconds).

The rest of the numbers are computed in my code and not relevant to this discussion.

This is the code segment that prints the information in the Excel:

  if (is_cal()) {
    nitems++;
    Serial.print(nitems);  
    Serial.print(", ");  
    Serial.print((micros() / 1000) - get_tstart());  
    Serial.print(", ");  
    Serial.print(cur_index);  
    Serial.print(", ");  
    Serial.print(get_quadrature_count());  
    Serial.print(", ");  
    Serial.print(encoderAngle);
    Serial.print(", ");  
    Serial.print(get_theoretical_angle());
    Serial.print(", ");  
    Serial.print(track_err);
    Serial.print(", ");  
    Serial.print(pulse);
    Serial.print("\r\n");   
  }

I am thinking that my processing is masking the ISR behind millis() / micros()?

What I do in my main loop (which only executes every 250 ms or so):

  1. read the MCP3304 SPI ADC 128 times at a 2 MHz SPI clock; this should take about 1.3 ms
  2. calculate two arctangents; this should take about 0.8 ms on an ATmega2560

Is it possible that the above routines are masking the timer ISR?
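As a quick check of what a masked timer ISR actually looks like, this throwaway sketch (not my real code) deliberately blocks interrupts for ~3 ms per pass; delayMicroseconds() busy-waits without needing interrupts, so it still works here, and millis() visibly falls behind a stopwatch:

void setup() {
  Serial.begin(57600);
}

void loop() {
  noInterrupts();
  delayMicroseconds(3000);            // ~3 ms with the timer ISR blocked
  interrupts();

  static unsigned long lastPrint = 0;
  if (millis() - lastPrint >= 1000) {
    lastPrint += 1000;
    Serial.println(millis() / 1000);  // lags further behind wall-clock every second
  }
}

If the SPI reads or the arctangents were disabling interrupts, the drift should have this same one-sided (always-losing) signature.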

orly_andico:
The rest of the numbers are computed in my code and not relevant to this discussion.

Again, I suggest you run the same test with the extraneous code removed and see whether the problem is inherent in your Arduino or caused by something you're doing. Then post the actual code together with the output that demonstrates the problem.

That sample doesn't seem to handle the micros() rollover... or am I just not seeing it?

You are not seeing it. When micros() goes from FFFFFFFF to 00000000 and beyond, the math of
00000010 - FFFFFF10 = 00000100
works out correctly because the digits above the 32nd bit are dropped.
micros() rolls over about every 71.6 minutes. I've let this run 24 hours at a time tracking against the official US time, and seen very low drift on a 16 MHz crystal-equipped Duemilanove.
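The same arithmetic as a short demonstration (plain unsigned long math, nothing Arduino-specific about it):

void setup() {
  Serial.begin(57600);
  unsigned long before = 0xFFFFFF10UL; // a micros() value just before rollover
  unsigned long after  = 0x00000010UL; // a value just after rollover
  Serial.println(after - before, HEX); // prints 100, i.e. 0x100 = 256 elapsed ticks
}

void loop() {
}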

In any case.. I don't doubt that the code you've shown is accurate, but it is only doing serial output once per second.

Feel free to have it print out faster. Change these 2 lines for hundredths or tenths.

if (prior_seconds != ones_seconds){
prior_seconds = ones_seconds; // show time update once/second

For anyone who is interested, I made a sketch yesterday to demonstrate the arithmetic rollover.
Try the second sketch at the bottom of Arduino Playground - TimingRollover.

I reduced the ADC activity to 32 reads per second (instead of the previous 128 reads per 0.25 seconds). This of course also reduced the arc-tangent calculations from one per 250 ms to one per second.

Now over a 4200-second period, millis() has lost 10 seconds. This is much less than before.

It does seem that SPI activity and/or the arc-tangent calculation causes millis() and micros() to lose ticks.

orly_andico:
I converted the code to use micros() - everywhere in my code where I used millis() I now use micros() / 1000.

micros() returns an unsigned long so try micros() / 1000UL.

Another technical point: a resonator is affected by temperature more than a crystal is.

You are using floats, aren't you? Arduino floats and doubles are both 4 bytes and s-l-o-o-o-w-w.
If you can, just send the raw data (much less of it, hey?) and post-process on a PC. The results will be more accurate.

Lastly, have a look at the (free) Processing language for the PC. Arduino is an example of the Wiring side of the Processing-and-Wiring pair.