Multiple delayMicroseconds() inside interrupt function (weird behavior)

Two Arduinos (an UNO and a Mega):
- GNDs of both are connected
- UNO's pin 2 is connected to Mega's pin 12

The Mega sends a 1000-microsecond HIGH pulse on pin 12.
The UNO executes a function when that happens (through a RISING interrupt on pin 2).

Long story short: calling 'delayMicroseconds()' more than once in that function actually makes the delay shorter?

Here's the code that reproduces it:

Arduino UNO (Receiver)

volatile unsigned long starttime = 0;
volatile unsigned long endtime = 0;
volatile bool sw = 0;

void setup()
{
  attachInterrupt(0,interrupt,RISING);
  Serial.begin(115200);
}

void loop()
{
  if(sw)
  {
    Serial.println(endtime-starttime);
    sw = 0;
  }
}


void interrupt()
{
  detachInterrupt(0);
  starttime = micros();

  delayMicroseconds(1000);
  delayMicroseconds(1000);
  delayMicroseconds(1000);
  
  endtime = micros();

  sw=1;
  EIFR = 0x01;  // clear pending external interrupt 0 flag (INTF0)
  attachInterrupt(0,interrupt,RISING);
}

Arduino Mega (Transmitter)

void setup() {
  Serial.begin(115200);
  pinMode(12,OUTPUT);
  digitalWrite(12,LOW);
}

void loop() {
  if(Serial.available() > 0)
  {
    if(Serial.read() == 49)  // 49 is ASCII '1'
    {
      digitalWrite(12,HIGH);
      delayMicroseconds(1000);
      digitalWrite(12,LOW);
    }
  }
}

The value that gets printed (endtime-starttime) is less than 1000us when I call 'delayMicroseconds(1000)' more than once :roll_eyes:

Can someone clarify what actually happens here? Am I missing something? =/

Here's why:

  • While an interrupt service routine runs, other interrupts are disabled, unless the routine explicitly enables them.
  • micros() uses the running count of Timer0 overflow interrupts to calculate how many thousands of microseconds have passed.
  • While your interrupt service routine runs, Timer0 overflow interrupts aren't processed, so the count doesn't advance.

That means the results of micros() are wrong while your ISR runs; calculations using those results will yield unexpected values.

If you're sure another interrupt 0 won't occur while interrupt() runs, you could re-enable interrupts inside it, allowing the Timer0 interrupt service routine to execute. Your delays will then run a little long, since they won't account for the time spent servicing the Timer0 overflow ISR.

Alternatively, you could set a flag in interrupt(), remove the delays from interrupt(), and execute the delays in loop(). That flag could be the value of micros() when the service routine executes; loop() could then just wait until micros() advances 3000 past that value.

It's generally a bad idea to use any delay - or any blocking call at all - inside an interrupt service routine. Interrupt service routines need to execute quickly, to avoid interfering with other ongoing processes.

See "Hints for writing ISRs," down the page at this excellent url: Gammon Forum : Electronics : Microprocessors : Interrupts.

millis() uses the running count of Timer0 overflow interrupts to calculate how many thousands of milliseconds have passed.

The short version: in general you should not use any form of delay inside an interrupt handler.

There are some exceptions, such as when you're trying to perform especially high frequency or low latency actions, but I'd consider that an advanced application and if you're up to that you probably aren't here asking why delays don't work right.

There are some exceptions, such as when you're trying to perform especially high frequency or low latency actions,

SoftwareSerial is an example where delays are needed, and used.

but I'd consider that an advanced application

Indeed.

and if you're up to that you probably aren't here asking why delays don't work right.

I think there's a nail somewhere screaming "Ow, my head!".

AWOL:

millis() uses the running count of Timer0 overflow interrupts to calculate how many thousands of milliseconds have passed.

That looks like a correction, but I'm not sure what it's correcting. From wiring.c, IDE 1.5.3, as it compiles for the Uno, with declarations, compiler directives and housekeeping snipped for brevity:

unsigned long micros() {
... <snipped>
	m = timer0_overflow_count;
	t = TCNT0;
	if ((TIFR0 & _BV(TOV0)) && (t < 255))
		m++;
... <snipped>
	return ((m << 8) + t) * (64 / clockCyclesPerMicrosecond());
}

timer0_overflow_count is bumped without further processing in the Timer0 overflow ISR; it's the running count of Timer0 overflows; it's used in calculating the return value of micros(); and it keeps track of - well, not exactly thousands of microseconds, but 1024's of microseconds. micros(), rather than millis(), is the function the OP asked about. The original quote, when I wrote it, said "microseconds," rather than "milliseconds," and, as far as I can tell, the original was right. Except for that part about "thousands," that is.

Is there something I'm not getting here?

It was a correction.
"micros()" does not calculate how many thousands of milliseconds have passed.
That would be like counting months to see how many hours have passed.
"millis()" counts how many thousands of microseconds have passed.

I see it now: I think that reply #1 says,

tmd3:
... how many thousands of microseconds have passed. [emphasis added.]

while the quote from reply #2 says,

AWOL:

... how many thousands of milliseconds have passed. [emphasis mine]

Idly speculating: I don't remember it, but I suspect that I typed the erroneous line and fixed it right away. I can't find an edit tag on reply #1, but I notice that one doesn't attach when I edit something immediately. That makes me suspect that AWOL sees that post by some means other than the one I use, and that the original error maintains a ghostly existence there.