Hi, I'm working on a project and, for compatibility with other things, I need to define a function NOW() that returns an int64_t value containing the time since startup in nanoseconds. So basically just like millis() and micros(), but a nanos(). Of course, the reason for using a 64-bit integer is so that the timer does not overflow after a few seconds; simply multiplying the output of micros() would not be good, as it overflows after roughly 4200 seconds. So I created this function:
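The general shape of it is roughly this (a minimal sketch, not my exact code; fake_micros_value and the mocked micros() are stand-ins so the snippet can run off-board — on a real board you'd call the library micros() directly):

```cpp
#include <cstdint>

// Stand-in for Arduino's micros(); set by hand here for demonstration.
static uint32_t fake_micros_value = 0;
static uint32_t micros() { return fake_micros_value; }

// Extends the 32-bit micros() counter to 64 bits by counting rollovers,
// then scales to nanoseconds. Must be called at least once per rollover
// period (2^32 us, about 71.6 minutes) so no wrap goes unnoticed.
int64_t NOW()
{
  static uint32_t last = 0;
  static uint64_t wraps = 0;         // completed 2^32-us rollovers

  uint32_t now = micros();
  if (now < last)                    // counter wrapped since last call
    wraps++;
  last = now;

  uint64_t total_us = (wraps << 32) | now;
  return (int64_t)(total_us * 1000); // microseconds -> nanoseconds
}
```

The signed 64-bit result leaves room for the negative intermediate values the downstream code computes, and it won't overflow for almost 300 years of uptime.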
How you plan to get nanosecond precision using micros() is beyond my understanding, though...
You'll need to call that function often enough to keep the count up to date (don't leave a gap longer than the ~4200-second rollover period). Given that, you could probably use millis() if your time unit is milliseconds (thousandths of a second); if your delta times are in microseconds, then micros() will be good enough.
Why ?
Can you explain what the project is about ?
Could this be an XY problem? https://xyproblem.info/
Nanoseconds do not make sense, since executing the code takes time as well.
If you need 64-bit nanoseconds, then you need an external 64-bit 1 GHz hardware timer (I don't know if those exist, and I don't know how the Arduino would read all 64 bits).
When I look into my crystal ball, I see an XY problem with another XY problem inside it.
The conversion factor from MILLIS to nanoseconds is 1,000,000. The conversion you want is MICROS to nanoseconds: 1,000. That might explain the factor-of-1000 error. Check that MICROSECONDS is defined as 1000. A better name might be MICROS_TO_NANOS.
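In other words (MICROS_TO_NANOS and microsToNanos are made-up names for illustration, since you haven't shown your definitions):

```cpp
#include <cstdint>

// The key point: micros() output needs a factor of 1,000 to become
// nanoseconds, not the 1,000,000 that millis() would need.
const uint64_t MICROS_TO_NANOS = 1000ull;     // 1 us = 1,000 ns
const uint64_t MILLIS_TO_NANOS = 1000000ull;  // 1 ms = 1,000,000 ns

// Widen first, then scale, so the multiply happens in 64 bits.
int64_t microsToNanos(uint32_t us)
{
  return (int64_t)((uint64_t)us * MICROS_TO_NANOS);
}
```

If your MICROSECONDS constant is accidentally 1,000,000, every result comes out exactly 1000 times too large.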
If you want more precision, on an AVR processor you can count time at the hardware clock frequency (16 MHz). For longer intervals, you can make the Overflow counter 32-bit.
// Fast Timer
// Written by John Wasser
//
// Returns the current time in 16ths of a microsecond.
// Overflows every 268.435456 seconds.
// Note: Since this uses Timer1, Pin 9 and Pin 10 can't be used for
// analogWrite().
void StartFastTimer()
{
  noInterrupts();  // protected code

  // Reset Timer 1 to WGM 0, no PWM, and no clock
  TCCR1A = 0;
  TCCR1B = 0;

  TCNT1 = 0;   // Reset the counter
  TIMSK1 = 0;  // Turn off all Timer1 interrupts

  // Clear the Timer1 Overflow Flag (yes, by writing 1 to it)
  // so we don't get an immediate interrupt when we enable it.
  TIFR1 = _BV(TOV1);

  TCCR1B = _BV(CS10);  // start Timer 1, no prescale
  // Note: For longer intervals you could use a prescale of 8
  // to get 8 times the duration at 1/8th the resolution (1/2
  // microsecond intervals). Set '_BV(CS11)' instead.

  TIMSK1 = _BV(TOIE1);  // Timer 1 Overflow Interrupt Enable

  interrupts();
}
volatile uint16_t Overflows = 0;

ISR(TIMER1_OVF_vect)
{
  Overflows++;
}
unsigned long FastTimer()
{
  unsigned long currentTime;
  uint16_t overflows;

  noInterrupts();
  overflows = Overflows;  // Make a local copy

  // If an overflow happened but has not been handled yet
  // and the timer count was close to zero, count the
  // overflow as part of this time.
  if ((TIFR1 & _BV(TOV1)) && (TCNT1 < 1024))
    overflows++;

  currentTime = overflows;  // Upper 16 bits
  currentTime = (currentTime << 16) | TCNT1;
  interrupts();

  return currentTime;
}
// Demonstration of use
#if 1
void setup()
{
  Serial.begin(115200);
  while (!Serial);

  StartFastTimer();
}

void loop()
{
  static unsigned long previousTime = 0;
  unsigned long currentTime = FastTimer();
  Serial.println(currentTime - previousTime);
  previousTime = currentTime;
  delay(100);
}
#endif
It's not about the resolution or whatever; the main reason for all of this is to support stuff that uses NOW() as a time function. That is also why using an unsigned integer would cause problems: some of that code does calculations that produce negative time values.
As stated, I'm trying to support other stuff that uses NOW() for time, so I can't change anything on that side. What is fine, though, is the resolution not actually reaching nanoseconds, just as micros() doesn't have true single-microsecond resolution on the Uno or similar boards.
I'm mainly using a Teensy 4.0 for testing but want to support as many platforms as possible, hence why I'm trying to use micros(): it will always be implemented.
As for the millis-to-nanoseconds conversion: my bad, that was a typo, I meant micros to nanoseconds; you can see the logic behind it in the code. But even then, it would not explain why the two versions produce different results.
I copied your two functions into a sketch and, so far, they are producing the same results to within 20-ish microseconds.
Since you don't provide your code for "MICROSECONDS", I made up this declaration: const uint64_t MICROSECONDS = 1000ull;
Maybe your 'factor of 1000' error is in how MICROSECONDS is defined in your code.
Yep, the software I'm trying to support was originally made for CubeSats; they probably won't be up that long, or they will have a system reset every once in a while.
Looks like NOW2() (your second example) has an overflow problem between 4292 and 4296 seconds. That is probably a uint32_t rollover at 4,294,967,296 microseconds.
The return value of micros() is already a too-small 32-bit unsigned integer, so it will overflow anyway. The code handles that overflow, and it should handle the overflow of a signed 32-bit value too.
At least that's my theory. I will see in roughly half an hour.
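For the record, the reason rollover is survivable at all is that unsigned subtraction in C++ is modular: the delta between two raw 32-bit samples comes out right even when the counter wrapped in between, as long as the real elapsed time is under the rollover period. A quick illustration (elapsedMicros is a made-up helper name):

```cpp
#include <cstdint>

// Delta between two raw micros() samples. Unsigned subtraction wraps
// modulo 2^32, so the result is correct even if the counter rolled
// over between the two samples (provided less than 2^32 us elapsed).
uint32_t elapsedMicros(uint32_t earlier, uint32_t later)
{
  return later - earlier;  // modular arithmetic does the right thing
}
```

Note that widening the raw samples to int64_t *before* subtracting breaks this property: across a wrap, (int64_t)later - (int64_t)earlier goes hugely negative, which is why the 64-bit extension has to track rollovers explicitly rather than just casting.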