
Topic: is a read/write from/to a uint8_t atomic?

JosAH




You are enabling interrupts within an interrupt handler? That is a big red flag.


No it isn't, I know what I'm doing (<--- famous last words ;-) My interrupt function is triggered by a falling edge and then it starts decoding an IR signal from a digital pin; that can take several milliseconds (I simply poll that pin in the interrupt function) and I want the millis() etc. interrupts to go on. At the start of my function I've detached it from that particular interrupt because I don't want it to be called recursively by another falling edge on that pin. B.t.w. I only enable interrupts after I've detached my function from that falling-edge interrupt.


Instead of polling the IR receiver in the ISR, why don't you leave the interrupt attached, and at each interrupt:

- call micros() to get the current time
- calculate the time since the previous edge, so that you can decode the next bit
- store the time you got from micros(), ready for the next interrupt



That was my first approach; the code ended up a bit too messy: it was a state machine and it had to keep track of a few too many state variables. I rewrote it and the code looks a bit cleaner now. It isn't cast in stone that this will be my final implementation; maybe I'll go back to your suggestion if I can make a cleaner implementation of it.

Thanks for replying and kind regards,

Jos
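
A minimal sketch of the edge-timestamp approach quoted above might look like this; the pin, interrupt number and buffer size are illustrative assumptions, not code from this thread. The ISR only records how long it has been since the previous edge, and the decoding happens outside it.

Code:
const byte IR_PIN = 2;                  // external interrupt 0 on an ATmega328P (assumption)
volatile unsigned long lastEdgeTime;    // micros() value at the previous edge
volatile unsigned int  pulseWidth[68];  // recorded gaps, in microseconds
volatile byte          pulseCount;

void onEdge()
{
  unsigned long now = micros();
  if (pulseCount < 68)
    pulseWidth[pulseCount++] = (unsigned int)(now - lastEdgeTime);
  lastEdgeTime = now;                   // ready for the next edge
}

void setup()
{
  pinMode(IR_PIN, INPUT);
  lastEdgeTime = micros();
  attachInterrupt(0, onEdge, CHANGE);   // fire on every rising or falling edge
}

void loop()
{
  // inspect pulseWidth[0 .. pulseCount-1] here, outside the ISR
}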

JosAH


Fair enough, but enabling interrupts inside an ISR is a last resort, as far as I am concerned.


Why is everybody so itchy when it comes to (re)enabling interrupts in an interrupt handler? I'm not longjmp( ... )ing around, I just want the micros() and millis() things to keep working (I have to keep track of wall-clock time (*)). Neither does my interrupt handler need to be reentrant ...

kind regards,

Jos

(*) yes, the device syncs its notion of time with an NTP server each day.

Nick Gammon

Let me turn the question around.

What is your objection to doing what the IRremote library does? It doesn't re-enable interrupts inside an ISR.
Please post technical questions on the forum, not by personal message. Thanks!

More info:
http://www.gammon.com.au/electronics

JosAH


Let me turn the question around.

What is your objection to doing what the IRremote library does? It doesn't re-enable interrupts inside an ISR.


I've seen that library; it records 'ticks' between state transitions, it uses 100 two-byte ints for that (200 bytes!) and it decodes afterwards. I decode 'on the fly' for one particular protocol (NEC); reading 32 bits in that protocol takes about 108 ms and I don't want to lose the millis() ticks. The 'header' pulse takes 560 µs and I might lose a millis() tick in there. Maybe if I use edge-change interrupts (my first approach) I don't need to (re)enable interrupts inside my handler ...

kind regards,

Jos
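
For context on the NEC timing: nominally each bit is a ~560 µs mark followed by either a ~560 µs space (logical 0) or a ~1690 µs space (logical 1), preceded by a 9 ms / 4.5 ms leading header. A sketch of classifying one bit from a measured space width; the thresholds are assumptions derived from those nominal figures, not Jos's actual code.

Code:
// Classify one NEC bit from a measured space width, in microseconds.
// Thresholds are rough windows around the nominal 560 us / 1690 us spaces (assumptions).
enum NecBit { NEC_ZERO, NEC_ONE, NEC_INVALID };

NecBit classifyNecSpace(unsigned int spaceMicros)
{
  if (spaceMicros > 400 && spaceMicros < 800)
    return NEC_ZERO;
  if (spaceMicros > 1400 && spaceMicros < 2000)
    return NEC_ONE;
  return NEC_INVALID;   // noise, the header, or a repeat code
}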

Nick Gammon

You won't lose a millis() tick in 560 µs - the interrupt is queued.

JosAH


You won't lose a millis() tick in 560 µs - the interrupt is queued.


Thanks, I have to remember that; I don't think this queueing of interrupts has a buffer? If yet another interrupt happens, will it overwrite the already pending interrupt or is it thrown away? As my decoding handler stands now, it takes about 100 ms to complete, so about 100 millis() interrupts have happened in the meantime (and many micros() ticks).

kind regards,

Jos

Nick Gammon

http://gammon.com.au/interrupts

I don't care what you say, 1/10 of a second inside an ISR is too long. Redesign.

JosAH

I sincerely disagree with the 'religion' that enabling interrupts in an interrupt handler is a bad thing, but I redesigned (and re-implemented) my handler anyway: it is now driven by change (rising or falling) edge interrupts (it is a state machine again, same as it was before) and I'm going to try both approaches. My main program loop has nothing else to do while a remote key is pressed (there's one user only), so the processor can be 'away from the job' for 0.1 s while it's reading the IR remote, but doing the reading entirely interrupt-driven doesn't hurt. B.t.w. nice article; I bookmarked it.

kind regards,

Jos

bperrybap

I'm with Jos with respect to the 'religion' of not re-enabling interrupts inside an ISR.
It can be particularly useful, and in certain instances it can be used as a fast form of task scheduling.

I've designed many products that actually did this.
Things like disk controllers in supercomputers, equipment in FAA towers, on the Space Shuttle,
and tens of millions of ADSL modems. If you bought/used an ADSL modem from about 1998 to 2001, odds are high
that you had a product using this type of technique, since my company was supplying 90%+ of the world's ADSL modems
during that time period.

In some cases, depending on the hardware design and overall system requirements, it can actually be required.
Think of a system that uses an NMI in a command/message/mailbox interface to ensure it can always communicate.
The NMI will always run, and while it is running it blocks out all the other "normal" interrupts.
In this type of system, the routine processing the NMI must, as quickly as possible, fudge up the stack and essentially lower
the interrupt level to foreground level, to allow all the other real-time interrupts to occur again and to get ready for the next potential NMI.
The NMI is essentially being used to schedule a task.
When the "NMI" routine returns, the code returns to the real/original foreground code.

In my mind, what makes re-enabling interrupts for task scheduling more difficult and scary to many people
on the AVR is that the AVR is pretty wimpy when it comes to interrupts, since it only has one level.
Many people don't like to think about potential re-entrancy issues.
The issues really aren't that bad to deal with and solve,
particularly once you think of interrupts and ISRs as just another thread.

Think about other processors that don't have such a wimpy ISR structure.
They have multiple interrupt levels. This means you can allow
some prioritization in h/w without having to do s/w scheduling.
It is like having a free mini task scheduler at the hardware level.

People may say: yes, but in that case a given interrupt level will never nest.
Well, OK, but with some simple code you can do this same thing on the AVR;
it just takes a little bit of s/w.

Consider this use case.
I regularly see the "Blink without delay" technique that people are encouraged to look
at and adopt. My opinion is that there are better ways to solve the problem,
particularly for the typical Arduino user.
One way is to use a user timer interrupt that schedules their code.
The issue that immediately comes up is: what if their code is long/slow?
It could potentially block other interrupts, like the system timer, since it would
be running inside the ISR.
But if the user timer interrupt code is smart enough, it can re-enable interrupts
so that the user's code is not blocking interrupts; i.e. the user's code is essentially
no longer inside the ISR.
There are some caveats, like what happens if the user's code takes longer
than the user timer interrupt period, but those are also solvable.
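
A minimal sketch of that user-timer pattern on the AVR follows; Timer2, the ~16 ms period and the busy flag are illustrative assumptions, not code from this post. The ISR re-enables interrupts before calling the user code, so millis(), micros(), serial and so on keep being serviced, and a guard flag makes an overrunning user task skip a tick rather than nest.

Code:
#include <avr/interrupt.h>

volatile bool userTaskBusy = false;    // guards against re-entering the user code

void userTask()                        // the (possibly slow) user code being scheduled
{
  // ... blink logic, sensor polling, etc. ...
}

ISR(TIMER2_COMPA_vect)
{
  if (userTaskBusy)                    // previous invocation still running:
    return;                            //   skip this tick instead of nesting
  userTaskBusy = true;
  sei();                               // re-enable interrupts: other ISRs keep running
  userTask();                          // effectively runs at foreground level now
  cli();                               // back to normal ISR context before
  userTaskBusy = false;                //   clearing the guard and returning
}

void setup()
{
  // Timer2 in CTC mode, /1024 prescaler: a compare match every ~16 ms at 16 MHz
  TCCR2A = _BV(WGM21);
  TCCR2B = _BV(CS22) | _BV(CS21) | _BV(CS20);
  OCR2A  = 249;
  TIMSK2 = _BV(OCIE2A);
}

void loop() { }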

I will agree that novices can quickly get themselves into trouble when doing something like this.
However, the point is that in embedded environments without any OS or scheduler,
ISRs can be used as part of the scheduling mechanism, even for lengthier tasks.
This technique can be particularly useful in very real-time systems where having actual tasks with
context switches or an idle loop would simply take too long or not meet the needed
latency or jitter requirements of the overall system.

On the AVR, this type of technique requires re-enabling interrupts in an ISR
to allow other ISRs to continue to be processed.
It shouldn't be shunned just because it may be thought of as a "bad practice".
Done correctly, it's just not that big of a deal.

--- bill



MichaelMeissner


I agree with Bill. When setting/clearing one bit (where the bit is known at compile time), the compiler usually generates a single atomic instruction. However, doing multiple bits, or arithmetic, is unlikely to be atomic.

That really depends on the underlying machine. Some machines have an OR/AND to memory (the x86 for instance does, but it isn't suitable for use if multiple processors are reading the memory without the LOCK prefix); many machines do not.

Typically the most you can hope for is setting a particular variable, one that is declared volatile and is of the appropriate type and alignment, to 0 or non-zero in an interrupt handler. Setting fields in a structure, doing arithmetic or logical operations, etc. are generally not atomic. On many processors you also want to reserve an entire cache line for it and not have anything else of value adjacent to it.
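
To tie this back to the thread title: on the ATmega328P a single-byte volatile load or store compiles to one instruction and is therefore atomic, but anything wider, and any read-modify-write, is not, and needs interrupts held off around the access. A small illustration using avr-libc's <util/atomic.h>; the variable names are made up for the example.

Code:
#include <util/atomic.h>         // avr-libc: ATOMIC_BLOCK / ATOMIC_RESTORESTATE

volatile uint8_t  irFlag;        // one byte: a single LDS/STS, atomic on the AVR
volatile uint32_t pulseMicros;   // four bytes: read byte by byte, NOT atomic

void setup() { }

void loop()
{
  uint8_t flag = irFlag;         // safe: a single-byte read cannot be torn

  uint32_t width;
  ATOMIC_BLOCK(ATOMIC_RESTORESTATE)
  {
    width = pulseMicros;         // interrupts held off so all four bytes
  }                              //   are copied as one consistent value

  // Note: irFlag++ is NOT atomic either; it is a load/modify/store
  // sequence, and an ISR could slip in between the load and the store.
  (void)flag;
  (void)width;
}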

Nick Gammon


That really depends on the underlying machine.  


I meant, on the ATmega328P, as empirically determined by examining the code generated by this particular compiler.
