
Topic: "Immunity-aware programming" (Read 731 times)

peter_

Sep 18, 2012, 01:16 pm Last Edit: Sep 18, 2012, 01:28 pm by peter_ Reason: 1
I'm trying to minimize the rate of hang-ups for a remote application built around the famous ATmega328P microcontroller.
I've noticed 4 popular approaches popping up:
1. use of the brownout detector
2. use of the watchdog
3. protect the sensors (if any) that stick out of the box into the electromagnetically noisy environment
4. immunity-aware programming

You can easily find literature about the first 3, but about the fourth I found next to nothing besides this Wikipedia article: http://en.wikipedia.org/wiki/Immunity-aware_programming

Despite this, it kind of smells like a good idea and I would like to know if anyone has some experience with it. Did you find the time investment worthwhile?

Jumping into the article referenced above raised some more questions that I hope someone will find interesting enough to comment on or answer.
The article mentions two types of error management based on the instruction pointer (IP). The first one, token passing with a global function, is fairly easy to understand, but I kinda lose ground with the "token passing with function parameters" method. (example code: http://en.wikipedia.org/wiki/Immunity-aware_programming)

In this scheme, every function has an ID and is equipped with two additional parameters: the callee and the caller. Such a function always returns the caller ID (which I find cumbersome, since all useful results now have to be returned by reference or through global variables).

Also, such a function always checks two things: the callee and the caller of the function that it is calling (itself). And this is where I get lost.

It seems to me that this method (in contrast to the first one) relies on the fact that the function parameters are short-lived local variables, and that if the IP jumps to another part of the code, it is very unlikely that the variables will match.

If this is true, wouldn't this scheme be plausible:
Code: [Select]

return_type functionX(parameter_type parameter1, ...)
{
    int ip_flow_var = functionX_ID;   /* value unique to functionX */
    ...
    do useful stuff here;
    ...
    if (ip_flow_var != functionX_ID)  do_software_reset();  /* IP must have jumped */
    else return useful_result;
}


At the beginning, declare and initialize a variable with a value that is unique to functionX; at the end, check if the value is still the same, meaning that the IP did not randomly jump. Of course, all functions would have to be equipped with such checking.
I guess this strategy would work on two assumptions:

1. That if the same segment of memory is checked (as with the previous call of functionX), the value in that segment has changed (has been rewritten by some other function), OR

2. That the location of the segment that is read has changed since the last call of functionX

(considering the 1/range probability that the value of the variable will remain the same by chance)

Since this has become a very broad question, I'm asking if anyone could point me to some compact literature that answers these questions about memory allocation.

What is the algorithm/rule of dynamic memory allocation, so that I can estimate various "immunity-aware" techniques?

regards
Petter








peter_

#1
Sep 18, 2012, 02:20 pm Last Edit: Sep 18, 2012, 02:22 pm by peter_ Reason: 1
I think I got part of the answer. The allocation algorithm is in stdlib...

To test various "immunity-aware" schemes, I guess I could emulate occasional random jumps to the part of the function that is below the ip_flow_var definition and do the statistics...

Peter

DuaneB

Here is a bit from Atmel on programming for hazardous environments -

http://www.atmel.com/Images/doc9108.pdf

Duane B

rcarduino.blogspot.com
Read this
http://rcarduino.blogspot.com/2012/04/servo-problems-with-arduino-part-1.html
then watch this
http://rcarduino.blogspot.com/2012/04/servo-problems-part-2-demonstration.html



PGT

I am not 100% sure, but here is some advice if your device operates near people and might harm them (note that an Arduino shouldn't be used in such an environment), as an added safety measure in such environments.

Besides electrical discharge, if it is for safety reasons, then your sensor output that keeps a machine going should always be on (providing a signal).
In other words, if you lose the signal, the device should stop whatever it is doing.
So even an "on" button cannot be a single push signal, and there cannot be an "on" signal to turn something off in a flow of behaviours.
No signal is then interpreted as a broken signal, and thus a reason to stop or to behave differently.

