I'm trying to minimize the rate of hang-ups for a remote application built around the popular ATmega328P microcontroller.
I've noticed 4 popular approaches popping up:
- use of the brown-out detector
- use of the watchdog timer
- protecting any sensors that stick out of the box into the electromagnetically noisy environment
- immunity-aware programming
You can easily find literature about the first three, but about the fourth I found next to nothing besides this Wikipedia article: http://en.wikipedia.org/wiki/Immunity-aware_programming
Despite this, it smells like a good idea, and I would like to know if anyone has some experience with it. Did you find the time investment worthwhile?
Digging into the article referenced above raised some more questions that I hope someone will find interesting enough to comment on or answer.
The article mentions two types of error management based on the instruction pointer (IP). The first one, token passing with a global variable, is fairly easy to understand, but I lose ground with the "token passing with function parameters" method (example code: http://en.wikipedia.org/wiki/Immunity-aware_programming).
In this scheme, every function has an ID and is equipped with two additional parameters: the callee ID and the caller ID. Such a function always returns the caller's ID (which I find cumbersome, since all useful results now have to be returned by reference or through global variables).
Such a function also always checks two things: the callee ID and the caller ID that were passed to it. And this is where I get lost.
It seems to me that this method (in contrast to the first one) relies on the fact that the function parameters are short-lived local variables, so that if the IP jumps to another part of the program, it is very unlikely that the variables will match.
If this is true, wouldn't this scheme be plausible:
return_type functionX(parameter_type parameter1, ...)
{
    int ip_flow_var = functionX_ID;
    ...
    /* do useful stuff here */
    ...
    if (ip_flow_var != functionX_ID) do_software_reset();
    else return useful_result;
}
At the beginning, declare and initialize a variable with a value unique to functionX; at the end, check whether the value is still the same, meaning the IP did not randomly jump. Of course, all functions would have to be equipped with such a check.
I guess this strategy would work on two assumptions:
1. That if the same segment of memory is checked (as with the previous call of functionX), the value in that segment has changed (has been rewritten by some other function), OR
2. That the location of the segment that is read has changed since the last call of functionX
(considering the 1/range probability that the value of the variable will remain the same by chance).
Since this has become a very broad question, I'm asking if anyone could point me to some compact literature that answers these questions about memory allocation.
What is the algorithm/rule of dynamic memory allocation that I would need in order to evaluate the various "immunity-aware techniques"?
regards
Petter