Counter overflow in ISR?

I have an ISR that increments a counter value on an external interrupt. The count may get high enough to overflow a uint64 (a tick every 100us for over a year). Is it safe to test for rollover and if so increment another counter from within the ISR? Or is there another approach to do this?

Thanks!

By my calculations, a uint64 ticked 10,000 times per second would take almost 51 million years to overflow.

How did you come up with your estimate that you need to worry about this problem?

EDIT: Redoing my calculation using the accurate 2^64 instead of just estimating with 4,000,000,000^2, it works out to over 58 million years.

Let me recheck with my collaborator – he came up with ~191 days. Wires may have been crossed. If so, sorry for the bother.

There are a number of useful constants for the maximum and minimum values of the integer types, available via #include <limits.h>.

See <climits> (limits.h) - C++ Reference

If you can, try to use these macros instead of just typing in 32767 or whatever in your code.
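For instance, a minimal sketch (the limit macros are standard; note that UINT64_MAX for the fixed-width types comes from <stdint.h> rather than <limits.h>, and tickCount here is just a stand-in for the ISR counter):

#include <limits.h>   // INT_MAX, LONG_MAX, ULONG_MAX, ...
#include <stdint.h>   // UINT64_MAX for the fixed-width types

volatile uint64_t tickCount;   // stand-in for the ISR counter

void setup ()
  {
  Serial.begin (115200);
  Serial.println (INT_MAX);    // 32767 on 16-bit-int AVR boards
  Serial.println (ULONG_MAX);  // 4294967295
  }  // end of setup

void loop ()
  {
  // rollover test against the named limit, not a magic number
  if (tickCount == UINT64_MAX)
    {
    // handle the (58-million-year) rollover here
    }
  }  // end of loop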

Jiggy-Ninja:
EDIT: Redoing my calculation using the accurate 2^64 instead of just estimating with 4,000,000,000^2, it works out to over 58 million years.

https://www.google.com/search?q=(2^64)%2F10000%2F60%2F60%2F24%2F365.24

Yup.

The OP is counting microseconds, so the overflow time for the various types is 1/1000 of what you normally get: one hour 11 minutes rather than 49 days for an unsigned long.

Is it safe to test for rollover and if so increment another counter from within the ISR? Or is there another approach to do this?

Normally I’d say yes. But if you are counting microseconds, an Arduino runs at 16 MHz, giving you only 16 clock cycles per microsecond to do the math. It’s getting pretty tight.

I’d suggest that a better way to do this might be with some external electronics. Go to your electronics store (or the internet) and get some T flip-flops (there are probably four on a chip), chain them together, and just take the output of the final one (RISING or FALLING). Each one halves the rate, which brings the tick rate down to a manageable level. A chain of 8 will reduce one tick per µs to one tick per ~1/4 ms, which is manageable if you are using an ISR.

PaulMurrayCbr:
The OP is counting microseconds, so the overflow time for the various types is 1/1000 of what you normally get: one hour 11 minutes rather than 49 days for an unsigned long.

Check the types again. OP is asking about uint64_t, which is not unsigned long. It's unsigned long long, which is much, much bigger. It's not likely to ever overflow unless you were counting femtoseconds picoseconds (EDIT: skipped a prefix there).

Also, the ticks are every 100 us, not every 1 us. Did you even read the post?

I concur with Jiggy-Ninja - I get 58,494,241 years.

I have an ISR that increments a counter value on an external interrupt.

I should point out that 64-bit adds are slow on this platform. I did a test which seems to indicate it takes 3.48 µs per add.

You better not get an interrupt every microsecond, then.

a tick every 100us for over a year

Is that an average, or will there indeed be only a tick per 100 µs? You are still losing 3.5% of your total processing time on that one add. :slight_smile:

void setup ()
  {
  Serial.begin (115200);
  Serial.println ();
  // volatile so the compiler cannot optimize the increments away
  volatile unsigned long long foo = 0;
  unsigned long start, finish;
  start = micros ();
  for (byte i = 0; i < 100; i++)
    foo++;
  finish = micros ();
  Serial.print ("Time taken to do increments: ");
  Serial.println (finish - start);  // µs for 100 increments
  }  // end of setup

void loop ()
  {
  }  // end of loop

Results:

Time taken to do increments: 348

Following up, and thanks for all the replies!

After checking with my collaborator, we have an overflow issue, but not quite as I described. We are grabbing an interrupt and incrementing a uint64_t counter variable every 100us. That would give 58 million years as has been noted.

However the calculations are actually done on integer picoseconds. So we have to multiply the counter value by 1e8, add/subtract a value generated by external hardware in the integer range of 0 to 1e8 ps, and then stick a decimal point 12 digits from the right to yield XXXXXXXXX.YYYYYYYYYYYY seconds, which is output via USB to the host.

Because of that 1e8 multiplication, the overflow is hit at ~213 days, but that’s in the calculation loop, not the counter. So the value of the counter is indeed good for 58 million years. We should be able to deal with the 213 day limit in the calculations outside the ISR, and thus I didn’t need to ask my original question :-/ .
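For anyone checking that figure, a quick sketch of the arithmetic (constant names are just illustrative):

#include <stdint.h>

// Each 100us tick is 1e8 ps, so counter * 1e8 must fit in a uint64_t.
const uint64_t PS_PER_TICK = 100000000ULL;              // 1e8 ps per tick
const uint64_t MAX_TICKS   = UINT64_MAX / PS_PER_TICK;  // ~1.84e11 ticks
// MAX_TICKS ticks * 100us/tick ~= 1.84e7 s ~= 213 days: the picosecond
// product overflows there, while the raw tick counter alone is good for
// ~58 million years.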

Some further info based on questions you’ve asked, or I suspect might ask:

  1. The interrupt is precisely every 100us; the source is a high stability frequency reference with digital divider.

  2. The measurement rate is fairly low – often once per second. The measurement loop currently takes about 1.2ms, subject to some optimization. So I’m saying that the system is capable of >500 measurements/second.

  3. Yes, the attached hardware has meaningful results at the picosecond level: ~60ps resolution, and jitter about the same. However, its maximum measurement range is only a few microseconds, so it is combined with the 100us counter to generate timestamps over a longer range.

  4. Yes, picosecond-resolution measurements over a year or more is a real thing – this system is designed to compare high-stability clocks like Cesium frequency standards where stability is measured in nanoseconds per year.

  5. The system works. We’re generating solid results now with measurements spanning days, but still working to optimize. I’m attaching a couple of plots showing the results of one set of measurements.

Thanks again for all the input! While my question may have been unnecessary, I appreciate the conversation it generated.

Yes, picosecond-resolution measurements over a year or more is a real thing -- this system is designed to compare high-stability clocks like Cesium frequency standards where stability is measured in nanoseconds per year.

Yowser! That's like measuring the distance from here to Uranus with a measuring stick marked in hundredths of a millimeter.

Why not use an unsigned long and subtraction to detect the differences, just as you would for time calculations with millis()? That will give the correct result even across an overflow, as long as the period is less than the time to overflow. For example:

volatile unsigned long isrCount;    // incremented by the ISR
unsigned long previousCount, previousMillis;
const unsigned long period = 1000;  // ms

void loop() {
   // other stuff
   if (millis() - previousMillis >= period) {
       previousMillis += period;
       noInterrupts();              // atomic copy of the shared counter
       unsigned long latestCount = isrCount;
       interrupts();
       unsigned long countInPeriod = latestCount - previousCount;  // correct across rollover
       previousCount = latestCount;
   }
   // more stuff
}

...R

MorganS:
Yowser! That's like measuring the distance from here to Uranus with a measuring stick marked in hundredths of a millimeter.

But rather less tedious to do! And the pesky planet keeps moving about as well...

Robin2 -- thanks for the suggestion. I think that will work for one of the modes of operation. But for another we're maintaining a continuous timestamp that needs to increase monotonically, so somehow we have to roll to the next digit on the odometer when the overflow occurs. I know there are recipes to do that, and it's much easier now that I realized we can do it in the main loop rather than in the ISR.

I think the best answer is probably to split the value into separate high and low order variables and process them separately, then combine to print the result. That way we can have the full 58 million year range, just in case...
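Something like this minimal sketch is what I have in mind (the names and the carry point are illustrative, not our actual code):

#include <stdint.h>

// Carry out of the low word before lowTicks * 1e8 ps can overflow:
const uint64_t CARRY_TICKS = 100000000000ULL;  // 1e11 ticks, illustrative

volatile uint64_t lowTicks;   // incremented by the ISR
uint64_t highWords;           // carries, maintained in the main loop

void serviceOdometer ()
  {
  noInterrupts ();            // lowTicks is shared with the ISR
  if (lowTicks >= CARRY_TICKS)
    {
    lowTicks -= CARRY_TICKS;  // roll the low digits back...
    highWords++;              // ...and bump the next odometer digit
    }
  interrupts ();
  }  // end of serviceOdometer

Printing then combines highWords and lowTicks, so the timestamp stays monotonic across the ~213 day calculation limit.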

Rather than interrupting 10,000 times a second you could use the External Clock feature of Timer1. Then the hardware will count for you and you only have to add that count to your total at least every six seconds (60,000 ticks).

If you are keeping a 100 µs clock in picoseconds, doesn't your timestamp always end in 8 zeroes?
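A minimal sketch of that Timer1 idea, assuming an Uno-class ATmega328P where the T1 input is digital pin 5:

uint64_t totalTicks;   // software extension of the 16-bit hardware count
uint16_t lastTCNT1;    // hardware count at the last service

void setup ()
  {
  TCCR1A = 0;                                     // normal mode
  TCCR1B = _BV (CS12) | _BV (CS11) | _BV (CS10);  // clock from T1 pin, rising edge
  TCNT1  = 0;
  }  // end of setup

void loop ()
  {
  // Fold the hardware count into the total at least once per ~6.5 s
  // (65,536 ticks at 10 kHz) so the counter can't wrap twice between visits.
  uint16_t now = TCNT1;
  totalTicks += (uint16_t) (now - lastTCNT1);  // wrap-safe difference
  lastTCNT1 = now;
  }  // end of loop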

Hi John -- I'm not familiar with Timer1 or that feature; let me do some research. Thanks for the suggestion.

It's a little hard to describe what this system is doing without going into a lot of background, but basically it is timestamping an external event.

The 100us clock acts as the "coarse" part of the measurement, and the magic hardware provides the picoseconds. So when an event hits the input, we grab the current count of 100us ticks, and add the interpolated value which is in integer picoseconds. So each "counter++" in the interrupt adds another tick worth 100us. But in the main loop, that needs to be multiplied to picoseconds, so the interpolated value can be added to it with integer math. That's where the 213 day overflow comes in. The end result is formatted as seconds.picoseconds.
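Roughly, in code form (names illustrative):

#include <stdint.h>

// One timestamp: coarse 100us ticks plus the hardware interpolator.
uint64_t buildTimestampPs (uint64_t ticks100us, uint64_t interpPs)
  {
  // ticks100us * 1e8 is the step that overflows after ~213 days;
  // interpPs is the interpolated value, 0..1e8 ps.
  return ticks100us * 100000000ULL + interpPs;
  }  // end of buildTimestampPs

// Formatting as seconds.picoseconds puts the decimal point 12 digits from
// the right: seconds = ts / 1000000000000ULL, fraction = ts % 1000000000000ULL.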

This would all get horrible if I was trying to measure thousands of events per second, but the inputs are usually pulse-per-second, so there's a fair bit of processing time available. Tests so far show at least 500 measurements/second are doable, and that's plenty.

Thanks for the reply, and I'll explore what Timer1 can do.