When compiling code, the Arduino IDE reports how much SRAM is taken up by global variables, with the remainder left free for local variables and function calls. How can one assess what is an appropriate amount of free memory to leave for these purposes?
I'm trying to develop a piece of code for an ATtiny84, and as that only has 512 bytes of SRAM I have rather less to work with than the ATmega328P's 2 KB. So I want to know, as I develop the code, where to keep an eye out. I'm sure I can keep the global variables under the limit, but I don't yet know by how much.
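For what it's worth, the snippet I keep seeing recommended for watching free memory at runtime on AVR is the one below; it measures the gap between the end of the heap (avr-libc's __heap_start and __brkval) and the current stack pointer, and I assume it applies to the ATtiny84 as much as to the ATmega328P:

    int freeRam() {
        extern int __heap_start, *__brkval;
        int v;
        // The address of v marks the current top of the stack; subtract
        // where the heap (or static data, if malloc is never used) ends.
        return (int)&v - (__brkval == 0 ? (int)&__heap_start : (int)__brkval);
    }

Printing or blinking that value at suspect points would at least tell me after the fact, but I'd rather have rules of thumb up front.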
I also hear that on the ATmega328P there is a requirement to keep at least 128 bytes of SRAM free as a safety margin. If I have to keep 128 bytes free on top of those needed for local variables and function calls, then 512 bytes doesn't leave much: if my globals came to, say, 300 bytes, only 84 would remain for everything else.
I understand that precisely determining the peak SRAM requirement comes close to solving Turing's halting problem, "does the machine stop" and all that. But there must be ways to get a good feel for how much free SRAM is needed.
Let's say you use almost exclusively global variables*, and any locals are nothing but a few uint8_t variables scattered in one or two functions. Are you then in a situation where, so long as the amount of free memory is greater than the number of uint8_t locals that exist throughout the whole program**, you will be fine?
*This way they get counted in the compile-time statistics, rather than being surprises that only show up when the code runs.
**To be clear, these locals are NOT all in the same scope, so there is no point in the program where all of them could actually be needed at once. Hence, would (free SRAM) > (total SRAM needed for all local variables, even though they never all exist at once) guarantee stability?
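To illustrate with a hypothetical pair of functions (foo() and bar() are made-up names), the two locals below can never be alive at the same time, so I'd expect them to be able to reuse the same stack byte (ignoring that the optimiser will probably keep values this small in registers anyway):

    uint8_t foo() {
        uint8_t a = 1;   // a exists only while foo() is running
        return a + 1;
    }

    uint8_t bar() {
        uint8_t b = 2;   // foo()'s frame is gone by now, so b can occupy the same space
        return b + 2;
    }

    void loop() {
        foo();   // a is alive here...
        bar();   // ...and b here, but never both at once
    }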
SRAM is apparently also needed for function calls, but to what extent? Is it just one byte per level of nesting, so that if your deepest chain is void setup() calling uint8_t StartCalibration(uint8_t input1, uint16_t input2), which in turn may call void ToggleThePins(), three bytes of SRAM is all this circumstance would take for three nested functions? Or does the SRAM required depend on the return types of the functions and the number of bytes their arguments take?
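Spelled out, the chain I have in mind looks like this (the bodies are placeholders I've made up). From what I've read, avr-gcc passes small arguments in registers and every call pushes a 2-byte return address on these chips, which would make each level cost at least two bytes rather than one, but I'd like that confirmed:

    void ToggleThePins() {
        PORTA ^= _BV(PA0);                 // placeholder body
    }

    uint8_t StartCalibration(uint8_t input1, uint16_t input2) {
        // input1 and input2 arrive in registers -- do they cost SRAM at all?
        ToggleThePins();                   // pushes a 2-byte return address?
        return input1 + (uint8_t)input2;
    }

    void setup() {
        uint8_t result = StartCalibration(1, 1000);   // another 2-byte return address?
        (void)result;                      // keep the compiler quiet
    }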
Is there anything else that requires SRAM bytes to be kept free? I can count up global variables, local variables and function calls simply by Ctrl+F-ing for type names; is there anywhere that things take up SRAM without easily searchable defining lines like these? Assume no use of malloc, and that all arrays use a compile-time constant for their size.
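One case I did stumble on: string literals apparently get copied from flash into SRAM at startup, and there's no type name to Ctrl+F for. Would keeping them in flash with PROGMEM, roughly as below, be the right precaution? (The names are just illustrative.)

    #include <avr/pgmspace.h>

    const char msg_ram[] = "copied into SRAM at startup";   // counts against the 512 bytes
    const char msg_flash[] PROGMEM = "stays in flash";      // no SRAM used until read out

    void printCalibration() {
        char buf[16];
        strcpy_P(buf, msg_flash);   // copy into a temporary buffer only when needed
    }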
If (global variables) + (total bytes of all local variables) + (maximum nesting depth of function calls, even with an interrupt firing while another function is deeply nested) is less than the total SRAM available, then is everything always fine?
Failing all else, if one writes the code to use global variables exclusively, with no locals at all, then so long as the sum of the global variable bytes plus the count of maximum nested function calls is less than the total the chip provides, are things good?
Thank you.
P.S. The code is not ready yet; I'm trying to get an understanding of this matter as early as possible, so that I have rules of thumb to guide development rather than having to debug horrible, weird crashes when certain actions in the program consume all the SRAM.