Be careful not to repeat the mistakes of the past, though. The old code kept a tick count and converted to milliseconds whenever millis() was called, and as a result it had very weird wrap-around behavior, because the math to convert to milliseconds overflowed long before the counter overflowed.
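Just to illustrate the kind of thing that bit the old code (numbers here assume the usual 16 MHz clock with Timer0 prescaled by 64, so 1024 us per overflow; this is illustrative, not the exact original source):

extern volatile unsigned long timer0_overflow_count;

unsigned long old_style_millis()
{
    // The 32-bit product timer0_overflow_count * 1024 wraps once it passes
    // 2**32, i.e. after about 4.2 million overflows (~71 minutes), even
    // though the counter itself wouldn't wrap for roughly 50 days.
    return timer0_overflow_count * 1024UL / 1000UL;
}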
I suppose that since the code still keeps a tick count, it would be pretty easy to write an alternative to millis() that uses 64-bit math (or whatever) to convert to a tuned, higher-accuracy time.
// Timer0 overflow counter maintained by the core (wiring.c).
extern volatile unsigned long timer0_overflow_count;

// Picoseconds per Timer0 overflow. Default assumes a 16 MHz clock with the
// /64 prescaler: 62500 ps per CPU cycle * 64 * 256 counts = 1,024,000,000 ps.
// Adjust from elsewhere.
unsigned long long adjusted_picoseconds_per_tick = 62500ULL * 256 * 64;

unsigned long long get_clock_picoseconds()
{
    // Note: reading a 4-byte volatile isn't atomic on the AVR; a real
    // version should briefly disable interrupts around the read, the way
    // millis() itself does.
    unsigned long long picos = timer0_overflow_count;
    picos *= adjusted_picoseconds_per_tick;
    return picos;
}

unsigned long high_accuracy_millis()
{
    // 1 ms = 10**9 ps
    return get_clock_picoseconds() / 1000000000ULL;
}
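The "tuned" part would just be a matter of overwriting the default from wherever you do your calibration. A hypothetical example (the correction figure is made up):

// Suppose a comparison against a known-good reference showed this board's
// crystal running about 20 ppm slow, so each nominal 1,024,000,000 ps
// overflow actually takes ~20 ppm longer.
void apply_clock_calibration()
{
    adjusted_picoseconds_per_tick = 1024000000ULL + 20480ULL; // +20 ppm
}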
I hear that the current 64-bit math support is "big" (the division routines pull in a fair amount of library code), but you wouldn't have to use it if you didn't need it.
At this level of precision, perhaps you ought to read the chip temperature and use that to index into an array of "adjusted_picoseconds_per_tick" values. ;-)
No, you don't want to use straight picoseconds in a 64-bit integer.
2**64 picoseconds ≈ 18.4 million seconds = about 213.5 days
i.e., nasty surprise after about 7 months
I suggest keeping a count of ticks, as well as a count of "overflows" (with 1 overflow = either 2**31 ticks or 2**32 ticks). Then, when the time in human units (milliseconds, seconds, whatever) is requested, you convert the "overflows" to human units, and separately convert the leftover ticks to human units, and then add the two results.
I wonder if the number of ticks per "overflow" absolutely has to be a power of two.
If it doesn't have to be a power of two, then set 1 "overflow" equal to 1 second (or just barely over 1 second). I think that this will make the conversion easier.
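Something like this rough sketch, assuming the usual 16 MHz clock with Timer0 prescaled by 64 (1024 us per hardware overflow). The names (seconds_count, micros_into_second, tick_millis) are invented for illustration, and the ISR shown would have to take the place of the overflow handler the core already installs:

#include <avr/io.h>
#include <avr/interrupt.h>

volatile unsigned long seconds_count = 0;      // whole-second "overflows"
volatile unsigned long micros_into_second = 0; // leftover, always < 1000000

ISR(TIMER0_OVF_vect)                           // fires every 1024 us
{
    micros_into_second += 1024;
    if (micros_into_second >= 1000000UL) {
        micros_into_second -= 1000000UL;       // carry the surplus, nothing is lost
        seconds_count++;                       // 32 bits of seconds = ~136 years
    }
}

unsigned long tick_millis()
{
    unsigned long s, us;
    uint8_t oldSREG = SREG;
    cli();                                     // read both halves atomically
    s = seconds_count;
    us = micros_into_second;
    SREG = oldSREG;
    // Convert each part separately, then add -- no 64-bit math required.
    // (s * 1000 still wraps a 32-bit result after ~49.7 days, but
    //  seconds_count itself keeps going if you expose it directly.)
    return s * 1000UL + us / 1000UL;
}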