Why keep ISR code terse?

Many interrupt tutorials and examples say that ISR code must be kept brief. They advise setting a status flag in the ISR, which is then read in the loop().

Does this not defeat the purpose of interrupts, as the loop() is still polling the status bits? (granted that the overhead to poll a status bit might be considerably less than polling a pin)

Consider the following:
I have a system in which a switch triggers an interrupt. When the interrupt occurs, I wish to perform rx/tx through serial. Is it safe to simply perform the serial rx/tx operations in the ISR? If not, why not?

What are some heuristics for writing ISRs?

Bonus points for documentation for the NewSoftSerial library. I have downloaded the library, but the examples are too basic. General documentation would be great.

Generally, interrupts are disabled while an ISR runs and re-enabled when it completes. A long ISR can therefore cause other interrupts to be missed; the shorter the ISR, the better the odds that every interrupt gets processed.

An interrupt is a “Hey! Something really important just happened, better do something about it QUICK!” message.
The “something” may not be around for long, or it may happen again very soon and if you’re processing one interrupt, you may miss another, so you want to do the very bare minimum.

Remember that at 9600 baud, a single character takes over 1 ms to transmit, which is an ice age even to a processor running at only 16 MHz, so you really don’t want to be transmitting strings using the standard libraries in an interrupt service routine.

It’s a huge subject, potentially full of pitfalls and tricks, and variations, depending on processor - any book on embedded computing will have a useful section.

Does this not defeat the purpose of interrupts, as the loop() is still polling the status bits?

Using an interrupt all but guarantees that your switch press will be recognized, even if you handle it a little later in loop(). Any flag/status bit you set in the interrupt routine stays set even after the physical switch changes state again, so it is still set when loop() gets around to checking it.

Polling a switch in loop(), on the other hand, depends on the switch still being set when the loop() logic gets around to checking it. Depending on how complex your logic is, and whether it contains any delays, the switch could be in a different state by the time your code reaches the part that reads the pin associated with it.

Remember to declare the flag/status bit as volatile, which tells the compiler’s optimizer not to make assumptions about it. Otherwise, the compiler may cache the value in a register, and loop() could keep re-reading a stale copy without ever seeing the ISR’s update.

Another gotcha, even when your ISRs are short: when the main loop reads a global volatile int (or any variable larger than one byte), you should disable interrupts, read the variable, and then re-enable interrupts. That makes the read atomic: the ISR cannot update half of a multi-byte variable while the loop code is in the middle of reading it.

That probably doesn’t explain the issue well, but it is what it is. :wink:


Not yet mentioned is the issue of debugging an interrupt service routine. If the code is simple (e.g. set a flag), it is trivial to debug (either the flag was set or it wasn’t). Serial is not safe to use in the context of an interrupt. Without Serial, debugging can be very tedious and difficult.

What are some heuristics for writing ISRs?

Simple - easier to debug and easier to prove the code is free of bugs
Short - ditto
Fast - long running ISRs interfere with everything else
Infrequent - interrupts are expensive; most of the processor’s state must be saved and restored

Take it from someone who’s spent years dealing with ISRs, do yourself a big favor and keep them as simple as possible.

Thanks for all the answers.

I am currently in the process of writing a real-time operating system for a school project, so I completely understand all of the above. I have been trying to draw parallels between my operating system and coding on the micro.

Retro-lefty, your atomic explanation was just fine, but maybe that’s because I knew about atomicity before your post.