is a read/write from/to a uint8_t atomic?

Thanks, I'll have to remember that. I don't think this queueing of interrupts has a buffer, though? If yet another interrupt happens, will it overrun the already pending interrupt, or is it thrown away? As my decoding handler stands now, it takes about 100 ms to complete, so about 100 millis() interrupts (and many micros() calls) will have happened in the meantime.

kind regards,

Jos

I don't care what you say, 1/10 of a second inside an ISR is too long. Redesign.

I sincerely disagree with the 'religion' that enabling interrupts in an interrupt handler is a bad thing, but I redesigned (and re-implemented) my handler anyway: it is now driven by interrupts on changing (rising or falling) edges, and it is a state machine again, same as it was before. I'm going to try both approaches; my main program loop has nothing else to do while a remote key is pressed (there's one user only), so the processor can be 'away from the job' for 0.1 s while it's reading the IR remote, but doing the reading entirely interrupt driven doesn't hurt either. By the way, nice article; I bookmarked it.

kind regards,

Jos
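A minimal sketch of the kind of edge-driven state machine Jos describes, not his actual code: the pin, the timing thresholds, and the 32-bit frame format are all made-up assumptions. The ISR fires on every CHANGE, measures the gap since the previous edge with micros(), and loop() only consumes the finished result.

```cpp
// Illustrative edge-driven IR state machine; protocol details are invented.
// Assumes the IR receiver output is on pin 2 (external interrupt 0 on an Uno).
const uint8_t IR_PIN = 2;

volatile uint32_t lastEdgeMicros = 0;
volatile uint32_t irCode = 0;        // bits collected so far
volatile uint8_t  irBitCount = 0;    // number of bits collected
volatile uint8_t  irFrameReady = 0;  // set when a complete frame has been read

void irEdgeIsr() {
  uint32_t now = micros();
  uint32_t width = now - lastEdgeMicros;    // time since the previous edge
  lastEdgeMicros = now;

  if (width > 10000UL) {                    // long gap: start of a new frame
    irCode = 0;
    irBitCount = 0;
  } else if (digitalRead(IR_PIN) == HIGH) { // a low period just ended; its width selects the bit
    irCode = (irCode << 1) | (width > 1200UL ? 1UL : 0UL);
    if (++irBitCount == 32) {
      irFrameReady = 1;                     // hand the finished frame to loop()
      irBitCount = 0;
    }
  }
}

void setup() {
  Serial.begin(115200);
  pinMode(IR_PIN, INPUT);
  attachInterrupt(digitalPinToInterrupt(IR_PIN), irEdgeIsr, CHANGE);
}

void loop() {
  if (irFrameReady) {          // the main loop only consumes finished frames
    noInterrupts();            // copy the multi-byte code without it being torn
    uint32_t code = irCode;
    irFrameReady = 0;
    interrupts();
    Serial.println(code, HEX);
  }
}
```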

I'm with Jos with respect to the 'religion' of not re-enabling interrupts inside an ISR.
It can be particularly useful, and in certain instances, can be used as a fast form of task scheduling.

I've designed many products that actually did this.
Things like disk controllers in supercomputers, equipment in FAA towers and on the Space Shuttle,
and tens of millions of ADSL modems. If you bought/used an ADSL modem from about 1998 to 2001, odds are high
you have a product using this type of technique, since my company was supplying 90%+ of the world's ADSL modems
during that time period.

In some cases, depending on the hardware design and overall system requirements, it can actually be required.
Think of a system that uses an NMI in a command/message/mailbox interface to ensure it can always communicate.
The NMI will always run, and when running it blocks out all the other "normal" interrupts.
In this type of system, the routine processing the NMI must, as quickly as possible, fudge up the stack and essentially lower
the interrupt level back to foreground level, to allow all the other real-time interrupts to occur again and to be ready for the next potential NMI.
The NMI is essentially being used to schedule a task.
When the "NMI" routine returns, the code will return back to the real/original foreground code.

In my mind, what makes re-enabling interrupts for task scheduling more difficult and scary to many people
on the AVR is that the AVR is pretty wimpy when it comes to interrupts, since it only has one level.
Many people don't like to think about the potential re-entrancy issues.
Those issues really aren't that bad to deal with and solve,
particularly once you think of interrupts and ISRs as just another thread.
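Treating the ISR as just another thread means giving shared data the same protection you would give it between threads. A minimal sketch of that, assuming avr-gcc/avr-libc; the shared counter and the input-capture vector are just illustrative:

```cpp
#include <stdint.h>
#include <avr/interrupt.h>
#include <util/atomic.h>

// Hypothetical 16-bit value shared between an ISR and the main loop.
// On the AVR a 16-bit access takes two instructions, so it is not atomic by itself.
volatile uint16_t pulse_count;

ISR(TIMER1_CAPT_vect)
{
    pulse_count++;             // ISR side: runs with interrupts disabled
}

uint16_t read_pulse_count(void)
{
    uint16_t copy;
    // Main-loop side: briefly disable interrupts so the two-byte read cannot
    // be torn by the ISR -- the same idea as taking a mutex between threads.
    ATOMIC_BLOCK(ATOMIC_RESTORESTATE) {
        copy = pulse_count;
    }
    return copy;
}
```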

Think about other processors that don't have such a wimpy ISR structure.
They have multiple interrupt levels. This means you can get
some prioritization in h/w without having to do s/w scheduling.
It is like having a free mini task scheduler at the hardware level.

People may say: yes, but in that case a given interrupt level will never nest on itself.
Well, OK, but with some simple code you can do the same thing on the AVR;
it just takes a little bit of s/w.
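For instance, avr-libc already provides a hook for this: declaring an ISR with ISR_NOBLOCK makes the compiler re-enable global interrupts at the start of the handler, which effectively turns that vector into a "low priority" interrupt. This is just a sketch; the pin-change vector is only an example.

```cpp
#include <avr/interrupt.h>

// Treat this pin-change interrupt as "low priority": ISR_NOBLOCK re-enables
// global interrupts on entry, so the system timer, UART, etc. can still
// preempt the longer work done here.
// (Without extra guarding such an ISR can also nest on itself.)
ISR(PCINT0_vect, ISR_NOBLOCK)
{
    // longer, less time-critical work goes here
}
```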

Consider this use case.
I regularly see the "Blink without delay" technique that people are encouraged to look
at and adopt. My opinion is that there are better ways to solve the problem,
particularly for the typical Arduino user.
One way is to use a user timer interrupt that schedules their code.
The issue that immediately comes up is: what if their code is long/slow?
It could potentially block other interrupts, like the system timer, since it would
be running inside the ISR.
But if the user timer interrupt code is smart enough, it can re-enable interrupts
so that the user's code is not blocking interrupts, i.e. the user's code is essentially
no longer inside the ISR.
There are some caveats, like what happens if the user's code takes longer
than the user timer interrupt period, but those are also solvable.
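A rough sketch of that idea, assuming avr-gcc/avr-libc, Timer2's compare-match interrupt as the "user timer", and a made-up user_task() callback; the re-entrancy guard is one way of handling the "runs longer than the period" caveat:

```cpp
#include <stdint.h>
#include <avr/interrupt.h>

// Hypothetical user callback (defined elsewhere) that may take "too long" for an ISR.
void user_task(void);

static volatile uint8_t task_running;   // re-entrancy guard

ISR(TIMER2_COMPA_vect)
{
    if (task_running)
        return;            // previous invocation still busy: skip this tick
    task_running = 1;

    sei();                 // re-enable interrupts: millis()/micros(), the UART,
                           // and this same timer can now interrupt us
    user_task();           // the user's code effectively runs at foreground priority
    cli();                 // back to normal ISR conditions before returning

    task_running = 0;
}                          // RETI re-enables interrupts as usual
```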

I will agree that novices can quickly get themselves into trouble when doing something like this.
However, the point is that in embedded environments without any OS or scheduler,
ISRs can be used as part of the scheduling mechanism, even for lengthier tasks.
This technique can be particularly useful in hard real-time systems where having actual tasks with
context switches or an idle loop would simply take too long, or would not meet the
latency or jitter requirements of the overall system.

On the AVR, this type of technique requires re-enabling interrupts in an ISR
to allow other ISRs to continue to be processed.
It shouldn't be shunned just because it may be thought of as a "bad practice".
Done correctly, it's just not that big of a deal.

--- bill

That really depends on the underlying machine. Some machines have an OR/AND-to-memory instruction (the x86, for instance, does, but it isn't safe if multiple processors are accessing the memory without the LOCK prefix); many machines do not.

Typically the most you can hope for is setting a particular variable that is declared volatile and is of the appropriate type and alignment to 0 or non-zero in an interrupt handler. Setting fields in a structure, doing arithmetic or logical operations, etc. are generally not atomic. In the case of many processors, you want to reserve an entire cache line and not have anything else of value adjacent to it.

MichaelMeissner:
That really depends on the underlying machine.

I meant: on the ATmega328P, as empirically determined by examining the code generated by this particular compiler.
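For concreteness, a minimal sketch of the pattern that observation supports on the ATmega328P (the identifiers and the UART receive vector are just examples): a single volatile uint8_t is read and written with one lds/sts instruction each, so the byte-sized flag cannot be torn by an interrupt, while the 16-bit value next to it still needs guarding.

```cpp
#include <stdint.h>
#include <avr/interrupt.h>

volatile uint8_t  frame_ready;   // one byte: a single lds/sts, atomic on the AVR
volatile uint16_t frame_length;  // two bytes: two instructions, NOT atomic

ISR(USART_RX_vect)
{
    // ...collect a frame (details omitted)...
    frame_length = 42;           // example value
    frame_ready  = 1;            // publishing the single-byte flag is safe as-is
}

int main(void)
{
    sei();
    for (;;) {
        if (frame_ready) {               // single-byte read: no guard needed
            cli();                       // but the 16-bit read must be guarded
            uint16_t len = frame_length;
            frame_ready = 0;
            sei();
            (void)len;                   // ...use the frame...
        }
    }
}
```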