Questions on Timing

https://github.com/arduino/Arduino/blob/master/hardware/arduino/cores/arduino/wiring.c

I am trying to figure out how Arduino implements the micros() function.

I understand an interrupt is called to update the current number of microseconds.

What happens when "timer0_overflow_count" (and hence the value returned by micros()) exceeds its limit? Since micros() reports microseconds as a 32-bit unsigned integer, it should overflow after approximately 70 minutes (2^32 µs ≈ 71.6 minutes).

Does the micros() function reset? Does that affect the millis() function? My guess is that it does, and that's all I really want to know.

Thank you.

Catcher:
Does the micros() function reset? Does that affect the millis() function?

The value rolls over, but in a non-destructive manner: it simply wraps back to zero and continues from there. In most calculations you won't even notice this, unless you measure periods longer than about 70 minutes, in which case you shouldn't be using micros() to begin with.

millis() is maintained independently inside the same Timer0 overflow interrupt, not derived from micros(), so the rollover in micros() does not disturb it. millis() itself rolls over after about 49.7 days (2^32 ms).

See also the Micros section of the Arduino reference.

Awesome! Thank you!

What is the advantage of this (located in the source code of wiring.c)

void delay(unsigned long ms)
{
    uint16_t start = (uint16_t)micros();

    while (ms > 0) {
        if (((uint16_t)micros() - start) >= 1000) {
            ms--;
            start += 1000;
        }
    }
}

versus this (modified by me)

void delay(unsigned long ms) {
    uint32_t start = millis();
    while (millis() - start < ms)
        ;  /* busy-wait */
}

Lastly, what's the need for uint16_t? Why not keep it as uint32_t?

Catcher:
What is the advantage of this (located in source code for wiring.c)

void delay(unsigned long ms)
{
    uint16_t start = (uint16_t)micros();

    while (ms > 0) {
        if (((uint16_t)micros() - start) >= 1000) {
            ms--;
            start += 1000;
        }
    }
}

versus this (modified by me)

void delay(unsigned long ms) {
    uint32_t start = millis();
    while (millis() - start < ms)
        ;
}

Lastly, what's the need for uint16_t? Why not keep it as uint32_t?

Your version, which uses millis(), may be less accurate. Imagine you call delay(1) just a few microseconds before millis() ticks over: the loop condition becomes false on the very next tick, so the actual delay is far less than one millisecond. The original version has better accuracy because it counts full 1000 µs intervals starting from the moment it is called.

Given that the MCU is 8-bit, the shorter an integer can be made without losing information, the smaller and faster the code: 16-bit arithmetic takes fewer instructions than 32-bit. The delay loop only ever compares intervals of up to 1000 µs, which fits comfortably in 16 bits, and unsigned wraparound keeps the subtraction correct even though the cast truncates micros().