Interrupts should be as short as possible: do their job and exit. A few microseconds normally suffice, so there is no need to "keep track of time" inside the interrupt.
Yes, they absolutely do work inside interrupts. What doesn't work is when the timer behind those functions rolls over and calls its own interrupt.
Your interrupt should only last a few microseconds. Sometimes you can get away with milliseconds, but that's an advanced topic (on top of the already advanced topic of interrupts).
What also doesn't work inside interrupts is analogRead(). The documentation doesn't make it clear but analog uses interrupts, which are normally blocked when you're already inside an interrupt. You can turn them back on, but then you need to make your interrupt handler non-reentrant.
The current version of analogRead() does not use interrupts. There is no use or need for interrupts when a function doesn't return before the read is complete.
Also, what's the use of a time stamp when the values are sampled and recorded at a known rate? A time stamp IMO makes sense when taken from an RTC; otherwise a simple sequence number would be sufficient (and even that is redundant).
What also doesn't work inside interrupts is analogRead(). The documentation doesn't make it clear but analog uses interrupts, which are normally blocked when you're already inside an interrupt.
This is not true.
analogRead uses a busy-wait, and takes around 100 µs. For this reason, it should not be used in an interrupt.
The lower bits of micros() will change in real time, they are read from T0. The higher bits will not reflect a possible overflow when read inside an ISR, as MorganS already pointed out.
EDIT: Byte boundaries do not apply, the timer bits are shifted according to the clock frequency.
A single pending timer overflow is taken into account.
And millis() is not updated inside an ISR for the same reason, it will always return the entry time of the ISR.
You can't be objecting to the technical content of my post. I described precisely the circumstances in which micros can be used in an interrupt service routine.
As far as I can tell... Are you seriously objecting to my word choice?
DrDiettrich:
The lower 2 bytes of micros() will change in real time, they are read from T0. The higher bytes will not reflect a possible overflow, when read inside an ISR, as MorganS already pointed out. And millis() is not updated inside an ISR for the same reason, it will always return the entry time of the ISR.
Does this mean that overflow from the lower two bytes of micros() is lost, if it occurs while inside an ISR?
[quote author=Coding Badly date=1468384310 link=msg=2838906]
You can't be objecting to the technical content of my post. I described precisely the circumstances in which micros can be used in an interrupt service routine.
As far as I can tell... Are you seriously objecting to my word choice?[/quote]
Sorry, your word choice confused me, as I'm not a native English speaker.
The technical content of your post is correct.
Just to clarify, before confusion increases:
A single pending timer overflow is reflected properly in micros(), even if called from an ISR (with interrupts disabled).
Also the overflow is not lost, it will be handled as soon as interrupts are enabled again.
When an ISR takes so long that further timer overflows occur, with interrupts still disabled, the reported time becomes unreliable.
The time between two timer overflows depends on the CPU architecture and clock frequency, typically ~1ms. But an ISR should never run so long, or it will affect all system timing and other interrupt handlers.
I wanted to point out a technical difference between micros() and millis() when used in an ISR. In this case millis() will return the same value over and over again, while micros() will keep advancing, wrapping back once for every lost timer overflow. Thus micros() can be used to implement or measure short delays even inside an ISR, but millis() can't be used for such purposes.
DrDiettrich:
When an ISR takes so long that further timer overflows occur, with interrupts still disabled, the reported time becomes unreliable.
The time between two timer overflows depends on the CPU architecture and clock frequency, typically ~1ms. But an ISR should never run so long, or it will affect all system timing and other interrupt handlers.
I wanted to point out a technical difference between micros() and millis() when used in an ISR. In this case millis() will return the same value over and over again, while micros() will keep advancing, wrapping back once for every lost timer overflow. Thus micros() can be used to implement or measure short delays even inside an ISR, but millis() can't be used for such purposes.
Thanks. But how do you arrive at ~1 ms between overflows of the lower two bytes of the microsecond counter? Surely it overflows every 2^16 = 65536 ticks, which at 16 MHz comes to about 4 ms?
From wiring.c:
// the prescaler is set so that timer0 ticks every 64 clock cycles, and the
// the overflow handler is called every 256 ticks.
This would mean a timer frequency of 16 MHz/64 = 250 kHz, and an overflow frequency of 250 kHz/256 ≈ 977 Hz, i.e. one overflow about every 1 ms, or exactly every 1.024 ms.
Please note that not all bits of T0 are used, so that the timing also works on a controller with a 1-byte timer.
A different clock frequency is taken into account in micros():