Interrupt heresy - reactions?

Standard advice on ISRs from the forums often includes one or more of the following:

  • avoid putting too much inside an ISR - keep them lean and mean
  • simply set a flag of some kind in the ISR, then poll that flag in the main loop and do the bulk of any processing required there (a minimal sketch of this pattern follows the list)
  • don't use function calls in the ISR
  • you can't use anything that uses interrupts inside the ISR, including, importantly, timing functions like millis(), and
  • comms using the Serial class
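For concreteness, here is a minimal sketch of the flag-setting pattern from the second bullet (pin, interrupt number and names are illustrative, not from any particular post; interrupt 0 is pin 2 on an Uno):

```cpp
// Minimal "set a flag in the ISR, poll it in loop()" sketch.
volatile bool buttonPressed = false;   // volatile: shared with an ISR

void buttonIsr()
{
  buttonPressed = true;                // bare minimum inside the ISR
}

void setup()
{
  pinMode(2, INPUT_PULLUP);
  attachInterrupt(0, buttonIsr, FALLING);   // interrupt 0 = pin 2 on an Uno
  Serial.begin(9600);
}

void loop()
{
  if (buttonPressed)
  {
    buttonPressed = false;
    Serial.println("button!");         // bulk of the work happens out here
  }
}
```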

So why is this advice being offered, and does it always apply?

The argument seems to run like this. Some sources of interrupts are random, in the sense that there is no telling when or how often they will occur. There is therefore no guarantee that the ISR will have finished servicing one interrupt before the next interrupt of the same type occurs. To handle this situation of 'overlapping' interrupts fully within the ISR itself, the ISR code would have to be re-entrant. ISRs are tricky to code and debug at the best of times, and writing safe re-entrant code is even more difficult, so keep things as simple as possible by leaving interrupts disabled inside ISRs.

Occasionally an incoming interrupt that follows close on the heels of a similar interrupt will be missed while you are busy processing the first interrupt, but in many cases this is acceptable. This is presumably the reason that the AVR processor has been designed to automatically disable further interrupts just before calling an ISR.

As a result, on the Arduino, unless you take counter-measures, all interrupts arriving during the processing of an ISR will supposedly be lost (unless they persist in some way until after the ISR has finished processing).

In that light, the standard advice starts looking very wise:

  • keep ISRs short (or you will miss even more interrupts)
  • don't use functions (they may contain code that uses interrupts, and this will fail)
  • don't use millis() or Serial in an ISR, since these use interrupts

But notice the large price you are paying for adopting this so-called 'safe' approach:

  • occasional lost interrupts (lost accuracy?)
  • occasional loss of characters during serial communications (now you may have to protect more thoroughly against dropped characters, eg a more complex protocol, slower comms)
  • inability to use standard timing mechanisms inside ISRs (may mean resorting to custom-built timing mechanisms)
  • inability to use standard comms inside ISRs (makes debugging ISRs much more difficult, may have to resort to flashing LEDs etc)
  • inaccuracy of timing mechanisms outside ISRs (since millis() will not be counting while the ISR is running, and will therefore always be running slow, by an undetermined amount)
  • loss of structure (avoiding function calls), and/or redundant coding (replacing function calls) for fear of using function calls inside the ISR

In some cases, you may nevertheless conclude that this is the right solution on balance.

BUT ...

In many situations, the nature of the interrupts involved is such that they are guaranteed to arrive serially (one at a time), and sufficiently spaced out so that concurrent invocation of the ISRs is never an issue.

Take the situation where there are only two types of interrupts: interrupts from a timer (A), and interrupts from a serial interface (B). You may get an A and a B arriving at the same time, but never two As or two Bs at the same time. Since the interrupts from any one source are guaranteed to arrive serially, you don't require re-entrant code in the ISR for either type of interrupt. Consequently you don't require the 'simplification' of switching off further interrupts in the ISR.

You can't stop the AVR chip from switching off interrupts before calling an ISR, but you CAN switch them back on again as soon as you're in the ISR (simply make sei(); the first instruction of your ISR.)
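For illustration, a minimal sketch of that idea (INT0 is just an example vector):

```cpp
#include <avr/interrupt.h>

ISR(INT0_vect)
{
  sei();   // re-enable global interrupts immediately: timer and
           // serial ISRs can now pre-empt this handler
  // ... longer processing goes here ...
}
```

avr-gcc can also do this for you: declaring the handler as ISR(INT0_vect, ISR_NOBLOCK) makes the compiler re-enable interrupts on entry.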

If this situation applies to you (and I believe it to be quite common) your life as an ISR writer will be greatly enhanced:

  • you'll very rarely miss an interrupt (the window during which interrupts are disabled will be extremely short), so either you don't have to cope with that possibility at all (in terms of extra code, or lost precision), or the impact of occasional missed interrupts is greatly reduced
  • you can start using normal timing mechanisms like millis() for timing inside your ISR
  • your millis timer will not run slow
  • comms using Serial outside the ISR will be unaffected (since the ISR involved with Serial can now interrupt your custom ISR)
  • you can use Serial comms within the ISR, which is especially valuable for debugging
  • you can safely use most function calls within the ISR without worrying about any embedded use of interrupts
  • you don't have to worry unduly about the size of the ISR, since you're no longer under the same time pressure, and you're not missing interrupts

In fact you can see that it is the very act of inhibiting further interrupts that is making the writing of ISRs seem difficult - by keeping interrupts enabled, most of the problems evaporate.

However, it all hinges on the nature of the interrupts you are trying to handle:

  • with random interrupts there is a real issue to solve: you can either accept the processor default (further interrupts inhibited) or write re-entrant code (tricky);
  • with serial, spaced-out interrupts, there is no big issue - why not re-enable interrupts at the start of the ISR and then just program normally?

I know it's heresy, and I'm expecting a fatwa, but I'm old and ugly - bring it on.

You have a few factual errors there. First for reference:

  • don't use function calls in the ISR
  • you can't use anything that uses interrupts inside the ISR, including, importantly, timing functions like millis()

You can call functions. Why not? You need to be cautious that those functions work properly with interrupts disabled. A lot of the core functions do exactly that.

You can use millis(). It just doesn't increment. You can use micros(). Since that just interrogates the hardware timer it will keep incrementing, unless you spend so long in the ISR (like, over a millisecond) that the timer overflow isn't caught.
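As a hedged illustration (pin, interrupt number and names are mine, not from the thread; interrupt 0 is pin 2 on an Uno), a pulse-width sketch that calls micros() from inside an ISR:

```cpp
volatile unsigned long pulseStart = 0;   // volatile: shared with the ISR
volatile unsigned long pulseWidth = 0;

void edgeIsr()
{
  if (digitalRead(2) == HIGH)
    pulseStart = micros();                // rising edge: note the time
  else
    pulseWidth = micros() - pulseStart;   // falling edge: compute width
}

void setup()
{
  pinMode(2, INPUT);
  attachInterrupt(0, edgeIsr, CHANGE);    // interrupt 0 = pin 2 on an Uno
  Serial.begin(9600);
}

void loop()
{
  noInterrupts();                  // copy the multi-byte value atomically
  unsigned long w = pulseWidth;
  interrupts();
  Serial.println(w);
  delay(500);
}
```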

occasional lost interrupts (lost accuracy?)

No. Each interrupt source sets a flag in the processor (eg. an external interrupt). That flag is tested the next time interrupts are enabled, in interrupt priority order. If your statement were correct the processor would be extremely flaky, which it isn't.
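As an aside, you can inspect or clear those pending flags yourself; a sketch for the ATmega328P (register and bit names are from the datasheet):

```cpp
#include <avr/io.h>

// On the ATmega328P, an external interrupt that fires while the
// global interrupt flag is clear sets a pending bit in EIFR and is
// serviced as soon as interrupts are enabled again. Writing a 1 to
// the bit clears it, i.e. discards the queued request.
void discardPendingInt0()
{
  EIFR = _BV(INTF0);
}
```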

Some interrupts (eg. an external LOW level interrupt) do not set such a flag. But these are primarily designed to wake the processor from sleep.

If your statement were true, then timer interrupts or serial interrupts would constantly be lost whenever a timer happened to fire just as serial data arrived. This simply doesn't happen.

occasional loss of characters during serial communications

No, for the reason given above for one thing: the interrupt sets a flag which is tested. Also, the serial UART has an input buffer of a couple of bytes, so you can afford to take a (reasonable) amount of time before grabbing the data.

inability to use standard timing mechanisms inside ISRs

No. You can use micros() which continues to be accurate because it just grabs a register from the hardware timer.

inability to use standard comms inside ISRs (makes debugging ISRs much more difficult, may have to resort to flashing LEDs etc)

You shouldn't be using comms inside an ISR anyway. Doing serial prints inside an ISR is going to throw the timing out so much that you are not even debugging what would happen without the debug prints. Flashing LEDs are perfectly acceptable. You can also send debugging out via SPI which is fast, and doesn't use interrupts to send, as described here: http://www.gammon.com.au/forum/?id=11329
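Along the lines of the linked article, a minimal sketch of the SPI debug idea (on AVR, SPI.transfer() busy-waits on the hardware SPIF flag rather than relying on interrupts, so it works even with interrupts disabled):

```cpp
#include <SPI.h>

void setup()
{
  SPI.begin();            // MOSI and SCK become outputs
}

void debugByte(byte b)    // safe to call from inside an ISR
{
  SPI.transfer(b);        // clocks the byte out; watch MOSI/SCK on
                          // a logic analyzer to read the value
}

void loop() { }
```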

inaccuracy of timing mechanisms outside ISRs (since the millis() will not be counting while the ISR is running, and will therefore always be running slow, by an undetermined amount)

No. Where do you get this stuff from? The millis() result is based on a hardware timer that is running whether or not you are in an interrupt. The only thing that isn't handled promptly is the overflow (every 1.024 ms), which is why you should keep ISRs short. That overflow interrupt will be "remembered" as described above and handled correctly when any executing ISR finishes.
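For the record, that figure follows from the hardware setup: on a 16 MHz board Timer0 runs with a /64 prescaler, so its 8-bit overflow arrives every 256 × 64 / 16 000 000 s = 1.024 ms.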

loss of structure (avoiding function calls), and/or redundant coding (replacing function calls) for fear of using function calls inside the ISR

You can call functions from an ISR, as I said before.

You can't stop the AVR chip from switching off interrupts before calling an ISR, but you CAN switch them back on again as soon as you're in the ISR (simply make sei(); the first instruction of your ISR.)

I strongly recommend against doing that. You can get into re-entrant ISR calls which will trash your registers, variables, etc. or simply send you into a loop.
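To make the failure mode concrete, a cautionary sketch (vector and timing chosen arbitrarily for illustration):

```cpp
#include <avr/interrupt.h>

// If INT0 edges arrive faster than this handler finishes, each new
// edge pre-empts the previous invocation; every nesting pushes
// another stack frame until the stack tramples your variables.
ISR(INT0_vect)
{
  sei();                    // interrupts back on immediately
  delayMicroseconds(500);   // "long" work: another INT0 can nest right here
}
```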

In fact you can see that it is the very act of inhibiting further interrupts that is making the writing of ISRs seem difficult - by keeping interrupts enabled, most of the problems evaporate.

The processor disables interrupts for a very good reason, when entering an ISR. Your fears about what happens are unfounded, and I recommend that you not re-enable interrupts as you suggest.

I get why someone might say don't call functions, but it seems a little extreme. I would say: Know the functions being called.

Regarding

There is therefore no guarantee that the ISR will have finished servicing one interrupt before the next interrupt of the same type occurs.

That is exactly the job of the programmer and system designer, to guarantee such things. If they can't be guaranteed, then perhaps a different solution is warranted.

Regarding nested interrupts: that is for sure a maze of twisty little passages, and there is a non-trivial price to be paid. Maybe it can't be avoided for a given application. But it's the 80/20 rule (or maybe 95/5, etc.): it's not needed for many real-world applications. Why go there if there is a simpler solution? Simple is good - the point is not to design in as much complexity as possible. I'm sure it's a great intellectual exercise just the same. We'll leave that exercise to the reader :wink:

There is therefore no guarantee that the ISR will have finished servicing one interrupt before the next interrupt of the same type occurs.

If you have a lot of such interrupts, and you enable interrupts, you will have a runaway stack overflow very quickly. However, one additional interrupt can be handled, because of the pending flag being set, as I described earlier.

BTW - I didn't move this thread from Programming Questions. I was happy for it to be there.

Flashing LEDs are perfectly acceptable

The time intervals are so short that using a logic analyzer might be advisable rather than trying to spot an LED flash.
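A minimal sketch along those lines (Timer1 compare vector chosen arbitrarily, timer setup omitted; PORTB bit 0 is digital pin 8 on an Uno):

```cpp
#include <avr/interrupt.h>

void setup()
{
  DDRB |= _BV(0);      // pin 8 as output
}

ISR(TIMER1_COMPA_vect)
{
  PORTB |= _BV(0);     // entry marker: pin 8 high
  // ... the work being measured ...
  PORTB &= ~_BV(0);    // exit marker: pin 8 low
}

void loop() { }
```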

kenny_devon:
In fact you can see that it is the very act of inhibiting further interrupts that is making the writing of ISRs seem difficult - by keeping interrupts enabled, most of the problems evaporate.

I'm curious to know what this opinion is based on. Have you done any interrupt-based programming? It's not obvious from your post.

I did move the question here - it didn't seem to me to be a specific programming question, rather a ramble through interrupts and timing.

PeterH:

kenny_devon:
In fact you can see that it is the very act of inhibiting further interrupts that is making the writing of ISRs seem difficult - by keeping interrupts enabled, most of the problems evaporate.

I'm curious to know what this opinion is based on. Have you done any interrupt-based programming? It's not obvious from your post.

I agree. One might ask why the Atmel AVR folks designed the hardware to automatically disable interrupts upon first entering an ISR as the default behavior, if it were almost always best practice to do otherwise? Certainly one can re-enable interrupts while inside an ISR to allow nested ISRs to function, but I would think that is a very edge case at best, where it is indeed the best method to handle a specific requirement.

Lefty

One might ask why the Atmel AVR folks designed the hardware to automatically disable interrupts upon first entering an ISR as the default behavior, if it were almost always best practice to do otherwise?

OTOH ARMs do allow nested interrupts, and IIRC it's up to you to disable them in each ISR if need be, so I don't think there's a fundamental law of nature that states they should be disabled.

I agree though that it's a can of worms you probably don't want to open unless very experienced.


Rob

You can re-enable interrupts in specialized circumstances, but you would need to be pretty damn certain that you won't get into too many recursive calls. And if you are certain that won't happen, it probably isn't necessary in the first place. Most of the problems the OP attributes to having interrupts disabled don't actually have the effects he thought they did.

The main reason I could get from the original post was that he wanted to be able to do serial comms for debugging, which is the one thing that would slow down the ISR so much that you might indeed get runaway recursion. So really, a redesign is needed. Doing a sei() inside the ISR is not going to be the silver bullet that solves everything.