Strange problem with ISR

Hi all,

I've got a small program that I'm running on a MEGA2560. It's simply an ISR which reads from PROGMEM at intervals and sends the data to an output (PORTK).

The sound sample is about 235K bytes, so what I do is first load the program itself, then load the sound data as a separate .HEX file with an offset of 0x1000.

It works fine, but once in a while (maybe once out of 2 or 3 plays) the sound stops in the middle and restarts at the beginning (acts like RESET was pressed).

I'm sure the code is right... my only guess is that there's something "non-atomic" happening that I am not accounting for. Please take a look at the code and see if you can find my mistake (if any). It's small enough that it should be easy to go through. Thanks!!!

#include <inttypes.h>
#include <avr/interrupt.h>
#include <avr/pgmspace.h>

#define SAMPLERATE 8000

volatile uint8_t busy;
volatile uint32_t timer;
volatile uint32_t bytes;
volatile uint32_t addr;

ISR (TIMER1_COMPA_vect) // timer 1 (16 bit)
{
    if (timer) { // if there is a count
        timer--; // decrement it
    } else { // else
        busy = 0; // flag not busy anymore
    }

    if (bytes) { // if bytes to play
        PORTK = pgm_read_byte_far (addr++); // push byte to DAC
        bytes--; // decrement count
    }
}

void delayMsec (uint32_t msec)
{
    cli();  // interrupts off
    busy = 1; // flag ISR busy
    timer = (msec * (SAMPLERATE / 1000)); // convert msec to sample ticks (write protected by cli/sei)
    sei(); // interrupts on
    while (busy); // wait while busy
}

void initISR (void)
{
    cli(); // interrupts off
    // set timer 1 to CTC mode
    TCCR1A = 0x00; // mode 4 (CTC) OCR1A = top
    TCCR1B = (_BV(WGM12) | _BV(CS10)); // CTC mode, prescaler = 1
    OCR1A = (F_CPU / SAMPLERATE); // audio sample rate
    TIMSK1 = _BV(OCIE1A); // enable compare A interrupt
    sei(); // interrupts on
}

int main (void)
{
    initISR (); // setup the ISR
    DDRK = 0xFF; // all PORTK bits = OUTPUT

    while (1) {
        cli();  // interrupts off
        addr = 0x1000; // set start address of audio
        bytes = 234893; // audio byte count
        sei(); // interrupts on
        while (bytes); // wait while playing audio
        delayMsec (3000); // 3 seconds between plays
    }
}

Well, bytes is a 4-byte variable, so there will certainly be times when it's getting changed as while(bytes) is checking for it to be 0.

For example, say bytes = 0x00010000 when while(bytes) starts checking for it to == 0.

Let's say it's checked the two LSBs which are 0x00, so it's getting ready to check the two MSBs.

Then, an interrupt happens and bytes-- changes the value to bytes = 0x0000FFFF.

Now, we get back to the while(bytes) loop where it checks the upper two MSBs...which are now 0x00!

BigBobby:
Well, bytes is a 4-byte variable, so there will certainly be times when it's getting changed as while(bytes) is checking for it to be 0.

For example, say bytes = 0x00010000 when while(bytes) starts checking for it to == 0.

Let's say it's checked the two LSBs which are 0x00, so it's getting ready to check the two MSBs.

Then, an interrupt happens and bytes-- changes the value to bytes = 0x0000FFFF.

Now, we get back to the while(bytes) loop where it checks the upper two MSBs...which are now 0x00!

That situation is addressed here:-
Critical sections ... accessing volatile variables
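
The gist is to take a snapshot of the multi-byte variable with interrupts briefly off. A minimal sketch, using avr-libc's <util/atomic.h> (the function name readBytesAtomic is just for illustration):

#include <inttypes.h>
#include <util/atomic.h>

extern volatile uint32_t bytes;         // the counter the ISR decrements

uint32_t readBytesAtomic (void)
{
    uint32_t copy;
    ATOMIC_BLOCK(ATOMIC_RESTORESTATE) { // saves SREG, does cli, restores SREG at the end
        copy = bytes;                   // consistent 4-byte snapshot
    }
    return copy;
}

// then in main(): while (readBytesAtomic()); // wait for playback to finish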

BigBobby:
Well, bytes is a 4-byte variable, so there will certainly be times when it's getting changed as while(bytes) is checking for it to be 0.

For example, say bytes = 0x00010000 when while(bytes) starts checking for it to == 0.

Let's say it's checked the two LSBs which are 0x00, so it's getting ready to check the two MSBs.

Then, an interrupt happens and bytes-- changes the value to bytes = 0x0000FFFF.

Now, we get back to the while(bytes) loop where it checks the upper two MSBs...which are now 0x00!

OMG!!! You're right. I specifically use "busy" (8 bits) for the timer, but then I check "bytes" (32 bits) directly!

Guess I needed a fresh set of eyes on this one. Thanks!

BigBobby:
Well, bytes is a 4-byte variable, so there will certainly be times when it's getting changed as while(bytes) is checking for it to be 0.

Just added another 8 bit variable ("playing") to the ISR and the ISR sets it to 0 when "bytes" becomes 0. Works like a charm... no glitches. Thanks again!
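
The change looks roughly like this (trimmed down, leaving out the delay/busy part of the ISR):

volatile uint32_t bytes;
volatile uint32_t addr;
volatile uint8_t playing;               // 8-bit flag - a single-byte read is atomic on AVR

ISR (TIMER1_COMPA_vect)
{
    if (bytes) {                        // if bytes to play
        PORTK = pgm_read_byte_far (addr++); // push byte to DAC
        bytes--;                        // decrement count
        if (!bytes) {
            playing = 0;                // flag "done" when the sample runs out
        }
    }
}

// in main():
//     cli();
//     addr = 0x1000;
//     bytes = 234893;
//     playing = 1;
//     sei();
//     while (playing);                 // waiting on an 8-bit flag - no torn read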

OldSteve:
That situation is addressed here:-
Critical sections ... accessing volatile variables

Thanks, that seems to be an excellent resource about these issues.

Personally, though, I wouldn't solve the problem the way he did, turning interrupts on and off each time bytes is read. I'd do something like this instead:

uint32_t bytes_copy;
do {
    do {
        bytes_copy = bytes;            // read the 32-bit counter
    } while (bytes_copy != bytes);     // re-read until two consecutive reads match (no torn copy)
} while (bytes_copy);                  // keep polling until it reaches zero

Krupski:
Just added another 8 bit variable ("playing") to the ISR and the ISR sets it to 0 when "bytes" becomes 0. Works like a charm... no glitches. Thanks again!

I see...that's a good solution too :slight_smile:

BigBobby:
I see...that's a good solution too :slight_smile:

Yeah... I know that I can turn off interrupts, check a 16 or 32 bit variable (atomically), then re-enable interrupts. But using "busy" flags and keeping them 8 bit seems (to me) to be cleaner. Now, all I have to do is wait "while (busy);".

I have a more complicated one for an FM radio receiver chip that has FOUR different timers in one ISR and I find it easier to just wait while busy than do the whole interrupt and tmp variable thing.
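
The shape of it is something like this (made-up names, not the actual radio code):

#include <inttypes.h>
#include <avr/interrupt.h>

#define NUM_TIMERS 4

volatile uint16_t timers[NUM_TIMERS];    // tick countdowns, one per delay
volatile uint8_t busy_flags[NUM_TIMERS]; // 8-bit flags - safe to poll without cli/sei

ISR (TIMER1_COMPA_vect)
{
    for (uint8_t i = 0; i < NUM_TIMERS; i++) {
        if (timers[i]) {
            timers[i]--;                 // count down
        } else {
            busy_flags[i] = 0;           // this one has expired
        }
    }
}

void startTimer (uint8_t i, uint16_t ticks)
{
    cli();                               // 16-bit write must not be torn by the ISR
    timers[i] = ticks;
    busy_flags[i] = 1;
    sei();
}

// then each delay is just: while (busy_flags[n]);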

Anyway, thanks again for the help. I knew it had to be simple. I purposely used an 8 bit var for "busy", then (DOH!!) I checked directly a 32 bit variable for zero. Frankly, I'm amazed it didn't glitch more than it did.

Krupski:
Yeah... I know that I can turn off interrupts, check a 16 or 32 bit variable (atomically), then re-enable interrupts. But using "busy" flags and keeping them 8 bit seems (to me) to be cleaner. Now, all I have to do is wait "while (busy);".

I have a more complicated one for an FM radio receiver chip that has FOUR different timers in one ISR and I find it easier to just wait while busy than do the whole interrupt and tmp variable thing.

Anyway, thanks again for the help. I knew it had to be simple. I purposely used an 8 bit var for "busy", then (DOH!!) I checked directly a 32 bit variable for zero. Frankly, I'm amazed it didn't glitch more than it did.

It's a FAR less efficient method, however. It takes considerably more code, and considerably more time. And for what benefit? Enabling and disabling interrupts are a single instruction each. VERY efficient, and the impact on any other code, including interrupt handlers, is virtually zero.

Regards,
Ray L.

RayLivingston:
It's a FAR less efficient method, however. It takes considerably more code, and considerably more time. And for what benefit? Enabling and disabling interrupts are a single instruction each. VERY efficient, and the impact on any other code, including interrupt handlers, is virtually zero.

I'm one of those people that avoids turning interrupts on and off in embedded code (as evidenced by my do-while solution above).

My first products had a PLL in software, however, that used an edge triggered interrupt. The PLL was difficult enough to analyze and implement without worrying about variations in the interrupt latency due to the interrupts being turned on/off.

Granted most systems are tolerant of variable interrupt latency, but having previously convinced myself that there are always ways around turning them on/off, I never got out of the habit of using those other ways. Yes, it might cost you a few bytes of ROM or a byte of RAM (not quite fair to call that FAR less efficient), but in exchange you get a system where the interrupt latency is entirely determined by the datasheet (no searching through the code to determine the worst case delay).

BigBobby:
I'm one of those people that avoids turning interrupts on and off in embedded code (as evidenced by my do-while solution above).

My first products had a PLL in software, however, that used an edge triggered interrupt. The PLL was difficult enough to analyze and implement without worrying about variations in the interrupt latency due to the interrupts being turned on/off.

Granted most systems are tolerant of variable interrupt latency, but having previously convinced myself that there are always ways around turning them on/off, I never got out of the habit of using those other ways. Yes, it might cost you a few bytes of ROM or a byte of RAM (not quite fair to call that FAR less efficient), but in exchange you get a system where the interrupt latency is entirely determined by the datasheet (no searching through the code to determine the worst case delay).

The only time you DON'T have variable interrupt latency is when you have only a single interrupt source, or all the interrupts are synchronized and can never overlap. Both of those cases are very rare in real-world applications. Otherwise, you cannot possibly control it, you just have to deal with it.

Regards,
Ray L.

RayLivingston:
The only time you DON'T have variable interrupt latency is when you have only a single interrupt source, or all the interrupts are synchronized and can never overlap. Both of those cases are very rare in real-world applications. Otherwise, you cannot possibly control it, you just have to deal with it.

Not when you have prioritized interrupts.

I honestly haven't read the interrupt section of the AVR's datasheet yet, but I'd expect it to have prioritized interrupts. Hopefully the watchdog is the highest priority, and pin interrupts after that.

You're telling me it's impossible for the AVR to interrupt an interrupt?

BigBobby:

You're telling me it's impossible for the AVR to interrupt an interrupt?

Interrupts are disabled while an interrupt is serviced. The priority only dictates which interrupt will be serviced when the current one is completed.

BigBobby:
Not when you have prioritized interrupts.

I honestly haven't read the interrupt section of the AVR's datasheet yet, but I'd expect it to have prioritized interrupts. Hopefully the watchdog is the highest priority, and pin interrupts after that.

You're telling me it's impossible for the AVR to interrupt an interrupt?

More advanced microcontrollers, like ARM, do support multiple priorities, and nested interrupts. But, even then, there will generally be multiple devices at several priority levels. Designing a system such that it cannot tolerate a reasonable amount of interrupt latency is asking for trouble, and creating a very "brittle" system. There are applications where that is necessary, but they are few and far between. Making that a requirement when there are better ways to do it is just foolish. I've been building embedded systems for close to 40 years, and have never once designed one that would fail if interrupt response was delayed by a few instructions.

Regards,
Ray L.

BigBobby:
You're telling me it's impossible for the AVR to interrupt an interrupt?

An ISR can turn interrupts back on. It's not advised. However, by the time you get to the code that does that, around a microsecond will have elapsed.

There is no priority of interrupts as such, except that when considering which interrupt to process next (when interrupts are enabled) the processor works its way down a list in a known order.

In this case (the original question) doing a SEI and CLI (which take one clock cycle each) will be much faster than making a copy, and then keeping on comparing it.

If you happen to be in a very timing-specific situation (for example as I was when generating VGA output) then you may need to make sure that interrupts happen when you want them to. This may well mean disabling the Timer 0 interrupt that is normally used for millis and micros functions.
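
For example, something along these lines (register and bit names as on the ATmega328/2560 family; the surrounding function is just illustrative):

#include <avr/io.h>

void doTimingCriticalWork (void)
{
    TIMSK0 &= ~_BV(TOIE0);  // Timer 0 overflow interrupt off - millis()/micros() stop advancing
    // ... timing-critical code here ...
    TIMSK0 |= _BV(TOIE0);   // back on (millis() loses whatever time passed in between)
}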

BigBobby:
... but in exchange you get a system where the interrupt latency is entirely determined by the datasheet ...

Well, everything is determined by the datasheet, right? But not turning interrupts off doesn't necessarily guarantee deterministic behaviour. If you have (say) a Timer 0 interrupt (which is used by the default library to count Timer 0 overflows) then interrupts can be off, due to the fact that you are inside an interrupt when another interrupt (which you happen to want to service quickly) occurs. So, using CLI and SEI doesn't put you in a much worse situation. Better, if anything, because copying a few bytes will be quicker than the time it takes to service the Timer 0 overflow.

I see. Most ISRs are so short it's probably not worth the time to re-enable the other interrupts, especially since there's no simple way to enable only the ones that are higher priority. Is it not advised because of specific issues (e.g. do you have to clear your own IF first else you'll just interrupt yourself?), or is it just considered complicated without much benefit?

Nick Gammon:
Well, everything is determined by the datasheet, right? But not turning interrupts off doesn't necessarily guarantee deterministic behaviour. If you have (say) a Timer 0 interrupt (which is used by the default library to count Timer 0 overflows) then interrupts can be off, due to the fact that you are inside an interrupt when another interrupt (which you happen to want to service quickly) occurs. So, using CLI and SEI doesn't put you in a much worse situation. Better, if anything, because copying a few bytes will be quicker than the time it takes to service the Timer 0 overflow.

When calculating interrupt latency, it pretty much comes down to these components:

max_isr_latency = time_to_ISR_as_specified_by_datasheet +                  // <- involves cycles to save stuff, possibly jump to a table to jump again, etc.
                  time_compiler_might_add_to_ISR +                         // <- technically in the ISR, but not to your handling code.
                  max(time_of_your_longest_interrupt,                      // <- this is one you don't have with a micro that uses prioritized interrupts.
                      time_of_your_longest_code_with_interrupts_disabled); // <- this is one you don't have if you don't do it.

There are plenty of 8 bit micros that have prioritized interrupts. Whether it's just HIGH and LOW like the PIC18 or multi-level like the 8051 (when I started it was the most common microcontroller in the world), having this capability eliminates term #3.

If you eliminate term #4 through programming, then you are only left with terms #1 and #2, which are constants. If this latency is part of a Matlab model of the control system, then this is very useful. Not only is a constant easy to model, but I also don't have to spend time going through the code figuring out how much I've added to the constant terms.
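
To put made-up numbers on it (purely illustrative, not from any datasheet):

// term 1 (hardware response + vector jump)   =   8 cycles
// term 2 (compiler prologue)                 =  20 cycles
// term 3 (longest other ISR)                 = 150 cycles  -> 0 with prioritized interrupts
// term 4 (longest cli/sei section)           =  10 cycles  -> 0 if interrupts are never disabled
//
// worst case = 8 + 20 + max(150, 10) = 178 cycles
// with terms 3 and 4 both gone:  8 + 20 =  28 cycles, a constant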

And how much do we really add to the code? I compared these two test functions:

void testfunction1(void)
{
  uint32_t bytes_copy;
  do{
    cli();                  // snapshot with interrupts briefly off
    bytes_copy = bytes;
    sei();
  } while(bytes_copy);
}

void testfunction2(void)
{
  uint32_t bytes_copy;
  do{
    do{                     // re-read until two copies agree
      bytes_copy = bytes;
    }while(bytes_copy != bytes);
  } while(bytes_copy);
}

These are the instructions they compile into:

00000186 <_Z13testfunction1v>:
 186:	f8 94       	cli
 188:	80 91 10 01 	lds	r24, 0x0110
 18c:	90 91 11 01 	lds	r25, 0x0111
 190:	a0 91 12 01 	lds	r26, 0x0112
 194:	b0 91 13 01 	lds	r27, 0x0113
 198:	78 94       	sei
 19a:	89 2b       	or	r24, r25
 19c:	8a 2b       	or	r24, r26
 19e:	8b 2b       	or	r24, r27
 1a0:	91 f7       	brne	.-28     	; 0x186 <_Z13testfunction1v>
 1a2:	08 95       	ret

000001a4 <_Z13testfunction2v>:
 1a4:	80 91 10 01 	lds	r24, 0x0110
 1a8:	90 91 11 01 	lds	r25, 0x0111
 1ac:	a0 91 12 01 	lds	r26, 0x0112
 1b0:	b0 91 13 01 	lds	r27, 0x0113
 1b4:	40 91 10 01 	lds	r20, 0x0110
 1b8:	50 91 11 01 	lds	r21, 0x0111
 1bc:	60 91 12 01 	lds	r22, 0x0112
 1c0:	70 91 13 01 	lds	r23, 0x0113
 1c4:	84 17       	cp	r24, r20
 1c6:	95 07       	cpc	r25, r21
 1c8:	a6 07       	cpc	r26, r22
 1ca:	b7 07       	cpc	r27, r23
 1cc:	59 f7       	brne	.-42     	; 0x1a4 <_Z13testfunction2v>
 1ce:	89 2b       	or	r24, r25
 1d0:	8a 2b       	or	r24, r26
 1d2:	8b 2b       	or	r24, r27
 1d4:	39 f7       	brne	.-50     	; 0x1a4 <_Z13testfunction2v>
 1d6:	08 95       	ret

It's only 7 instructions longer...and since it's only protecting a variable that's decremented, I could save 3 of those instructions. That's not a high price for making the 4th term go away.

I agree that saving the 4th term isn't that useful if you're already stuck with the 3rd term, but it doesn't really hurt either.

Wait a minute.

Did I just use main() in an .ino file?

I had no idea until now that I'm not forced to use setup() and loop()!

:slight_smile:

Yes, you can do that. If you choose to, you may want to call init(), which sets up the timers, particularly Timer 0 for millis, micros, etc. OTOH, that might be the very thing you are trying to avoid. :slight_smile:
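
A minimal skeleton might look like this (whether you keep the init() call is up to you):

#include <Arduino.h>

int main (void)
{
    init ();        // Arduino core housekeeping: starts Timer 0 for millis()/micros(), etc.
                    // leave it out if you don't want those timers touched

    // setup code goes here ...

    while (1) {
        // loop code goes here ...
    }
}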


BigBobby:
Is it not advised because of specific issues (e.g. do you have to clear your own IF first else you'll just interrupt yourself?), or is it just considered complicated without much benefit?

You may interrupt yourself, in which case you have re-entrancy issues. Even if you don't, you may call something (eg. millis) which is itself not re-entrant. My general advice when people think they have the need to do it is to redesign. I am pretty certain the corresponding interrupt flag is cleared on entry to the ISR, not upon leaving. For example, in 18.11.7 in the datasheet:

OCF2B is cleared by hardware when executing the corresponding interrupt handling vector.

I take that to mean "upon entering the interrupt" because an interrupt causes the program counter to be replaced by the corresponding interrupt vector address, which is then executed. This is almost always a jump to the ISR in your code (hence you already have the expense of one jump instruction). Thus I take the datasheet to mean: "At the same time as changing the program counter to contain the interrupt vector, the corresponding interrupt flag is cleared".

Therefore an interrupt (eg. an external interrupt) could interrupt the same ISR that was handling a previous one (if you enabled interrupts).
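
If someone really must do it, avr-libc has a shorthand for an ISR that runs with interrupts enabled, and a guard flag is one way to at least keep it from nesting into itself. A sketch only (vector chosen arbitrarily):

#include <avr/interrupt.h>

volatile uint8_t in_isr;                // simple re-entrancy guard

ISR (TIMER2_COMPB_vect, ISR_NOBLOCK)    // ISR_NOBLOCK: compiler re-enables interrupts early in the ISR
{
    if (in_isr)                         // already running? bail out rather than nest
        return;
    in_isr = 1;

    // ... longer work that other interrupts may now pre-empt ...

    in_isr = 0;
}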


As for your calculation of latency, the amount of code the compiler adds to the ISR prologue is not fixed. Depending on what registers the ISR uses (which the compiler knows), it has to push those at the start and pop them at the end, and no others.

Note that the AVR processor automatically disables interrupts upon entering an ISR, and re-enables them when executing the RETI instruction, so you don't have to do that explicitly, unlike some other processors.

Yes, I'm sure that's the best advice for forum posters. Heh, and even though I'm in the habit of using tricks to avoid turning interrupts off/on, I still don't see myself enabling interrupts in any ISRs on the Arduino.

If I ever did it? It'd probably be a system where I had 3 interrupts total: 1 for the timer, 1 that ran often and had to be sort of long, and 1 that was some sort of a safety signal that mattered when the long ISR was running. Without prioritization in the hardware, more than 3 ISRs would be difficult to manage with software prioritization (IMO).

Some compilers offer directives to control some of this, but yes, it's something you always need to check the assembly for to get an actual measurement. If you ever change the file, the compiler version, the compiler options, etc., you get to check it again. IAR Workbench is the compiler I've really been using for the past month (when not on these forums), and it seems really proud of its static analysis tools. Maybe one of those measures max latency to my ISRs.