Enabling and disabling ISRs

I'm writing a program that will need timer interrupts (which I haven't used on Arduino before).

The program can operate in several modes, each needing its own interrupt code. Because time is critical in interrupt handlers, I'd rather not have code like

if (mode == foo)
    do this;
else
    do that;

but rather replace the entire interrupt handler when switching between modes. Example code I have seen uses a macro ISR(), which seems to do two things (roughly as in the snippet below this list):

  1. define code one might want to use as an interrupt handler.
  2. ensure that the code following ISR will be called when a particular interrupt triggers.
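
For example, the snippets I've seen look roughly like this (the vector name and the counter are just placeholders, not from any specific example):

#include <avr/interrupt.h>

volatile uint16_t ticks = 0;

// ISR() both defines the handler body and plants its address in the
// vector-table slot for Timer1 Compare Match A, so it runs on that interrupt.
ISR(TIMER1_COMPA_vect) {
    ticks++;
}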

Is there a way to switch interrupt handlers under program control? I've seen attachInterrupt(), but it claims to be for "external" interrupts only, and I suppose timer interrupts are "internal".

OK, now I get it. Interrupt handlers cannot be switched at runtime because the vectors are in flash memory rather than SRAM. attachInterrupt() is cheating by beginning the actual ISR with a call to the desired function (which will delay the interrupt by at least as much as a redundant "if (mode) ..." would).

I suppose the solution is to have each program mode use a separate interrupt vector or accept that redundant "if".

I'm writing a program that will need timer interrupts...

As an exercise in using timers and writing ISRs - this is the way to go.

If you're keen to explore the potential outside of the Arduino core, you could even patch the ISR flash vector table from your application. It may not be very practical, but it is possible (look at the bootloader source).

Another option would be to have a simple one-line ISR that calls a vector in RAM. You would then update the RAM vector depending on application mode.
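
A minimal sketch of that idea, assuming a volatile function pointer as the "RAM vector" and Timer1 Compare Match A as the interrupt (the names are mine, not from any library):

#include <avr/interrupt.h>

typedef void (*handler_t)(void);

// The "RAM vector": a function pointer the application changes when the mode changes.
volatile handler_t timer1Handler = 0;

void modeA_handler(void) { /* do the first thing */ }
void modeB_handler(void) { /* do the other thing */ }

// One-line ISR that jumps through the RAM pointer (an indirect call,
// roughly the same cost as what attachInterrupt does internally).
ISR(TIMER1_COMPA_vect) {
    if (timer1Handler) timer1Handler();
}

void setMode(handler_t h) {
    // A pointer write is two bytes on AVR, so guard it against the ISR firing mid-update.
    cli();
    timer1Handler = h;
    sei();
}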

For most real-life applications however, you're typically better off with a single timer interrupt or none at all. An example of the former would be the blink-without-delay tutorial in the Playground.
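
From memory, that tutorial boils down to something like this (pin and interval are arbitrary); all the timing rides on millis(), i.e. on the existing timer0 interrupt, with no extra ISR of your own:

const int ledPin = 13;
const unsigned long interval = 1000;   // ms between toggles
unsigned long previousMillis = 0;
int ledState = LOW;

void setup() {
    pinMode(ledPin, OUTPUT);
}

void loop() {
    // Compare against the free-running millis() clock instead of blocking with delay().
    if (millis() - previousMillis >= interval) {
        previousMillis += interval;
        ledState = (ledState == LOW) ? HIGH : LOW;
        digitalWrite(ledPin, ledState);
    }
}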

A single timer interrupt will typically have an associated minimum overhead of some 50 CPU cycles. If you synchronize to a single hardware timer, you could do far better. One approach might be as follows:

  • disable timer0 interrupts (used by millis/delay/micros)
  • start the 16-bit timer1 (no interrupts)
  • synchronize your code directly against the timer1 binary counter

With this setup you have no ISR overhead whatsoever.
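
A rough sketch of that approach (ATmega328 register names; the prescaler and tick count below are placeholders):

void setup() {
    // 1. Disable the timer0 overflow interrupt used by millis()/delay()/micros().
    //    Note that millis() stops advancing after this.
    TIMSK0 &= ~_BV(TOIE0);

    // 2. Run timer1 free: normal mode, prescaler /8 (2 MHz tick at 16 MHz), no interrupts.
    TCCR1A = 0;
    TCCR1B = _BV(CS11);
    TIMSK1 = 0;
}

void loop() {
    // 3. Synchronize directly against the 16-bit counter - no ISR overhead at all.
    uint16_t start = TCNT1;
    while ((uint16_t)(TCNT1 - start) < 2000) {
        // busy-wait ~1 ms (2000 ticks at 2 MHz), or do useful work and re-check
    }
    // ... do the periodic work here ...
}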

The faster your timer runs (i.e. the more often the ISR fires), the more overhead you will accumulate in a given time period. So unless your timer is exceptionally slow, you're typically better off without one.

Thanks for the tip on checking out the bootloader source!

This is for sampling and analyzing audio, at 14000 samples/interrupts per second. I am using timer1 to auto-trigger the start of an analog-to-digital conversion when TCNT1 == OCR1B, and then take care of the data in the ADC conversion-complete interrupt. That way I can leave timer0 running, for access to millis(), and still have the samples taken at precise intervals.
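
In register terms the setup is roughly like this (ATmega328 names; the prescaler, channel and compare values here are placeholders rather than my exact ones):

#include <avr/interrupt.h>

volatile uint8_t latestSample;

void setup() {
    // Timer1 in CTC mode, TOP = OCR1A, prescaler /8 -> 2 MHz tick at 16 MHz
    TCCR1A = 0;
    TCCR1B = _BV(WGM12) | _BV(CS11);
    OCR1A  = 142;                       // 2 MHz / 143 ~ 14 kHz sample period
    OCR1B  = 100;                       // any value below TOP; this match is the ADC trigger

    ADMUX  = _BV(REFS0) | _BV(ADLAR);   // AVcc reference, channel 0, left-adjusted result
    ADCSRB = _BV(ADTS2) | _BV(ADTS0);   // auto-trigger source: Timer1 Compare Match B
    // Enable ADC, auto-trigger, conversion-complete interrupt, prescaler /64 (250 kHz ADC clock)
    ADCSRA = _BV(ADEN) | _BV(ADATE) | _BV(ADIE) | _BV(ADPS2) | _BV(ADPS1);
    sei();
}

// Conversion-complete interrupt: store the sample and re-arm the trigger
ISR(ADC_vect) {
    latestSample = ADCH;                // top 8 bits (ADLAR set); read ADC for all 10 bits
    TIFR1 = _BV(OCF1B);                 // clear the compare flag or no further triggers occur
}

void loop() {
    // ... process latestSample here ...
}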

Interrupt handlers cannot be switched at runtime because the vectors are in flash memory rather than SRAM. attachInterrupt() is cheating by beginning the actual ISR with a call to the desired function (which will delay the interrupt by at least as much as a redundant "if (mode) ..." would).

I suppose the solution is to have each program mode use a separate interrupt vector or accept that redundant "if".

I don't understand your logic here - the interrupt vector calls your ISR via a function pointer, so there is no "if" overhead. Otherwise, why would "detachInterrupt" exist?

I think that reading these tutorials may shed some light on the interrupt thingies:
http://www.avrfreaks.net/index.php?name=PNphpBB2&file=viewtopic&t=55347
http://www.avrfreaks.net/index.php?name=PNphpBB2&file=viewtopic&t=37830
http://www.avrfreaks.net/index.php?name=PNphpBB2&file=viewtopic&t=76634

there is no "if" overhead

Consider a program that uses a particular interrupt for one purpose for, say, 5 minutes and then switches to another mode in which it needs the same interrupt for another purpose. In pseudocode:

ISR(whatever) {
    if (purpose == 0) {
        do the first thing;
    } else {
        do the other thing;
    }
}

If I know that purpose will have the same value for those 5 minutes, it seems unnecessary to have the interrupt evaluate that if-statement with the same result over and over again. Preferably, I'd like to have:

ISR(whatever, first version) {
    do the first thing;
}

ISR(whatever, second version) {
    do the other thing;
}

and then have some way of switching between the first and second version under program control when purpose changes to avoid the "if"-overhead.
attachInterrupt superficially seems to allow such a switch, but its vector jump is only an alternative way of implementing that "if", still wasting the cycles.

A proper mode-switch would entail patching the flash vector table, as BenF suggested.

A proper mode-switch would entail patching the flash vector table

"attachInterrupt" doesn't do anything to the vector table.
It does, however, revector interrupts via the table "intFunc".
The "if" overhead is already there - it checks to see that there is a non-zero function pointer in "intFunc".

The source is there in your installation for you to read.

and then have some way of switching between the first and second version under program control

Perhaps using the "detachInterrupt" and "attachInterrupt" functions?

The source is there in your installation for you to read.

I did read it, of course. That's how I found out that attachInterrupt's approach isn't going to help me shave off any CPU cycles from the interrupt code. Putting an extra "if purpose..." in the ISR will be faster than jumping via that intFunc table.

Perhaps using the "detachInterrupt" and "attachInterrupt" functions?

I don't think this was ever about attach/detach interrupt (as they're limited to external input), but whether this scheme was worth copying for efficient handling of timer interrupts.

This is for sampling and analyzing audio, at 14000 samples/interrupts per second.

Would it not be better if you could double, triple or even quadruple that sampling rate?

The ADC can also be configured for continuous (free-running) mode sampling - self-triggering, if you like. In this mode, the sampling frequency is fixed and determined by the CPU clock / ADC prescaler (no need for the timer). You can use the prescaler to trade precision (e.g. 8 bits rather than 10) for sampling speed. In one application I have a 16 MHz Arduino sending samples at a rate of 200 kHz to a PC - however without using any ISRs.
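
A minimal free-running setup might look like this (ATmega328 registers; the prescaler and channel are placeholders - at /16 the fixed rate works out to roughly 77k samples/s):

volatile uint8_t sample;

void setup() {
    ADMUX  = _BV(REFS0) | _BV(ADLAR);    // AVcc reference, channel 0, left-adjust for 8-bit reads
    ADCSRB = 0;                          // ADTS2:0 = 000 -> free-running (self-trigger) mode
    // Enable, auto-trigger, prescaler /16 -> 1 MHz ADC clock, ~77k samples/s;
    // running the ADC clock this fast is the precision-for-speed trade described above.
    ADCSRA = _BV(ADEN) | _BV(ADATE) | _BV(ADPS2);
    ADCSRA |= _BV(ADSC);                 // start the first conversion; it then retriggers itself
}

void loop() {
    if (ADCSRA & _BV(ADIF)) {            // conversion finished?
        sample = ADCH;                   // grab the 8-bit result
        ADCSRA |= _BV(ADIF);             // clear the flag (write 1 to clear)
        // ... send or process the sample, no ISR involved ...
    }
}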

You seem to have a good understanding of how this works, but I don't quite follow your priorities. You seem to be concerned with the overhead of calling a function through a pointer from within an ISR. This is probably adding 4-6 cycles. The ISR itself however adds an order of magnitude more (e.g. 40-60 cycles) just for context switching (saving/restoring registers). Why not focus on this if you're after speeding up your sampling rate or freeing up the CPU for processing?

but I don't quite follow your priorities

Well, you're right. Those 4-6 cycles aren't worth worrying about. They aren't really a "priority", it's just that they were the only obvious inefficiency that I couldn't figure out how to get rid of so that's why I started a thread about it.

And yes, the CPU is certainly fast enough to handle more than 14000 samples/sec. The limiting factor is memory. I intend to do an FFT on the data. It uses floating-point math, and so will need 4 bytes per sample even if the samples pre-FFT could fit into one byte each. FFT algorithms also want a power of 2 for the number of samples. So with 2 KB of SRAM I can spend at most 1 KB on the captured data, i.e. 256 samples. At a 14000 Hz sample rate I could measure frequencies from 55 - 7000 Hz, a useful range.
With 4 KB of SRAM I could have used a 512-point FFT at 28000 samples/sec and analyzed 55 - 14000 Hz instead.
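
Spelling out that arithmetic (the constants are just the figures above):

const float sampleRate = 14000.0;            // samples per second
const int   N          = 256;                // FFT length, a power of 2

const int   bufferBytes = N * sizeof(float); // 256 * 4 = 1024 bytes, half of the 2 KB SRAM
const float binWidth    = sampleRate / N;    // ~54.7 Hz -> the "55 Hz" lower limit
const float nyquist     = sampleRate / 2;    // 7000 Hz upper limit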

If you want a fast FFT, consider this one:
http://elm-chan.org/works/akilcd/report_e.html
http://elm-chan.org/works/rsm/report_e.html

There is an FFT "engine" written in assembly using fixed-point math; download the source called avr-fft and take a look at it. Maybe you already know that the Arduino ADC can be run at up to 200k samples per second?

download the source called avr-fft

Interesting... A quick scan of the code shows that they use separate buffers for inputs (sample values) and outputs (volumes per frequency). I was planning to use Sörensen's algorithm (http://faculty.prairiestate.edu/skifowit/fft/sfftcf.txt), which is in-place, i.e. it uses the same buffer for input and output to conserve space. (It is also written in FORTRAN, but that translates rather easily to C.)

But using fixed-point math instead of floating point puts a lot less load on the processor and, as a result, gives a faster calculation.
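
For what it's worth, a Q15-style fixed-point multiply is just one integer multiply and a shift, whereas a float multiply on the AVR goes through the soft-float library (this helper is a generic sketch, not taken from the avr-fft code):

#include <stdint.h>

// Q15 fixed point: values in [-1, 1) stored in an int16_t, scaled by 2^15.
static inline int16_t q15_mul(int16_t a, int16_t b) {
    return (int16_t)(((int32_t)a * b) >> 15);   // one 16x16->32 multiply plus a shift
}

// Compare with "float c = fa * fb;", which becomes a call into the
// software floating-point routines, costing dozens of cycles per multiply.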