A simple Google search for "arduino malloc" will return all sorts of warnings. I understand that on a uC system where minimal resources are available it's hard (or overkill) to implement a proper memory management system, so memory fragmentation can (and will) occur if you start running multiple realloc() calls every cycle. For instance (pseudo-code):
uint8_t* p = NULL;
uint8_t p_size = 0;

for(;;) {
    if( Wire.available() ) {
        // Grow the buffer to hold the newly available bytes
        uint8_t* np = (uint8_t *) realloc( p, sizeof(uint8_t) * (p_size + Wire.available()) );
        if( np != NULL ) { p = np; p_size += Wire.available(); }
        [ do random stuff with data at *p ]
        [ call realloc() to shrink *p after data has been processed ]
        [ free() whatever needs to be freed ]
    }
}
Run this for a while and the SRAM will sooner or later become fragmented, and the program will crash or just start doing stuff it isn't supposed to do. As far as I understood it, the problem is that malloc(), realloc() and the like need to reserve a contiguous block of memory, and if there is none they can return a block that already has data in it, leading to memory corruption.
My question is: what if we do not try to shrink memory? Increase *p to store more data when needed but never decrease it; instead just clear (NULL or 0x00) the bytes that were processed, reusing the already allocated memory. Will that still be bad? If so, why?
fuh:
My question is: what if we do not try to shrink memory? Increase *p to store more data when needed but never decrease it; instead just clear (NULL or 0x00) the bytes that were processed, reusing the already allocated memory. Will that still be bad? If so, why?
How do you "increase *p"?
What was the point of clearing (NULL or 0x00) the "bytes that were processed"?
PeterH:
What was the point of clearing (NULL or 0x00) the "bytes that were processed"?
OK, here I could have written "remove x bytes from p_size to reflect the data that was processed/removed/whatever on *p". But the general idea is that the allocated space of *p will be reused to store new data and never shrunk.
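In code, the idea is roughly this (just a sketch; buf_append/buf_consume are made-up names, and buf_consume assumes n <= buf_len):

#include <stdlib.h>
#include <string.h>
#include <stdint.h>

static uint8_t *buf = NULL;
static size_t buf_cap = 0;   /* bytes allocated, never shrinks */
static size_t buf_len = 0;   /* bytes currently stored */

/* Append n bytes, growing the allocation only when needed. */
int buf_append(const uint8_t *src, size_t n) {
    if (buf_len + n > buf_cap) {
        uint8_t *np = (uint8_t *)realloc(buf, buf_len + n);
        if (np == NULL) return 0;    /* old buffer is still valid */
        buf = np;
        buf_cap = buf_len + n;
    }
    memcpy(buf + buf_len, src, n);
    buf_len += n;
    return 1;
}

/* Drop n processed bytes from the front; capacity stays as-is. */
void buf_consume(size_t n) {
    memmove(buf, buf + n, buf_len - n);
    buf_len -= n;
}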
fuh:
My question is: what if we do not try to shrink memory? Increase *p to store more data when needed but never decrease it; instead just clear (NULL or 0x00) the bytes that were processed, reusing the already allocated memory. Will that still be bad? If so, why?
Well, because say you allocate 5 lots of 20 bytes, and then free the first one. The allocator doesn't try to "shrink" anything per se, but it also can't make the freed block larger, because it is adjacent to the second allocation. So to allocate (say) 30 bytes it has no choice but to allocate more at the end.
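You can watch this happen on a desktop machine (exact addresses depend on the allocator, but with a typical one the freed 20-byte hole is too small to hold the 30-byte request):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    void *blk[5];
    for (int i = 0; i < 5; i++)
        blk[i] = malloc(20);        /* 5 lots of 20 bytes */
    free(blk[0]);                   /* free the first one */

    void *p = malloc(30);           /* the 20-byte hole is too small, so
                                       this ends up past blk[4] */
    printf("hole at %p, 30-byte block at %p\n", blk[0], p);
    return 0;
}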
If you can allocate more memory while running, you need that space available anyway. So rather than fragmenting your RAM, just create a buffer as big as you need to begin with, then create/move/modify/destroy objects in that space.
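For the Wire example above that could be as simple as this (rough, untested sketch; BUF_MAX = 64 is just a guess at the worst case):

#include <Wire.h>

#define BUF_MAX 64                  // worst-case size, allocated once

static uint8_t buf[BUF_MAX];        // fixed buffer, lives in .bss
static uint8_t buf_len = 0;

void setup() {
    Wire.begin();
}

void loop() {
    while (Wire.available() && buf_len < BUF_MAX)
        buf[buf_len++] = Wire.read();
    // ... process buf[0..buf_len-1], then reset buf_len to 0 ...
}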
pYro_65:
If you can allocate more memory while running, you need that space available anyway. So rather than fragmenting your RAM, just create a buffer as big as you need to begin with, then create/move/modify/destroy objects in that space.
Powerful you have become, the dark side I sense in you.
Is it a "easy way" to know the number of values stored at o_Array without doing accounting ?
Your lack of faith in the designers of memory allocation is ... disturbing.
The fact is that existing malloc systems are carefully designed to minimize fragmentation, re-use and combine freed blocks, and generally do the best they can.
Your pseudo-code appears to show ideas that already inside the existing system probably are.
Google "malloc" or similar and a lot of work on the concept has been done, you will see.
A call to realloc() first determines whether the operation is about to grow or shrink the current allocation. When shrinking, the case is easy: the existing chunk is split, and the tail of the region that is no longer to be used is passed to the standard free() function for insertion into the freelist. Checks are first made whether the tail chunk is large enough to hold a chunk of its own at all, otherwise realloc() will simply do nothing, and return the original region.
When growing the region, it is first checked whether the existing allocation can be extended in-place. If so, this is done, and the original pointer is returned without copying any data contents. As a side-effect, this check will also record the size of the largest chunk on the freelist.
If the region cannot be extended in-place, but the old chunk is at the top of heap, and the above freelist walk did not reveal a large enough chunk on the freelist to satisfy the new request, an attempt is made to quickly extend this topmost chunk (and thus the heap), so no need arises to copy over the existing data. If there's no more space available in the heap (same check is done as in malloc()), the entire request will fail.
Otherwise, malloc() will be called with the new request size, the existing data will be copied over, and free() will be called on the old region.
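You can check the grow behaviour yourself with something like this (portable sketch; whether it extends in place depends on the state of the heap at that point):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *p = (char *)malloc(16);
    if (p == NULL) return 1;
    strcpy(p, "hello");

    char *np = (char *)realloc(p, 64);      /* grow request */
    if (np == NULL) { free(p); return 1; }

    /* Extended in place: same pointer, nothing copied.
       Otherwise the data was copied to a new chunk. */
    printf(np == p ? "extended in place\n" : "copied to a new chunk\n");
    free(np);
    return 0;
}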
The problem is that my real-world experience was not quite like that: I found that using grow/shrink procedures often resulted in random halts at runtime. And when searching for similar problems, a lot of people just say to stay away from dynamic memory allocation on a uC system, a thing that I just can't understand. I mitigated my issue by not shrinking allocated memory and just treating it as free procedurally, with the help of accounting.
At the end of the day I was seeking feedback from C/C++ programmers more experienced than I am.
As far as I can tell Arduino still uses avr-libc 1.6.4 (my Arduino's libc.a is dated 2008, and avr-libc-user-manual.pdf states 1.6.4), and there is a documented bug related to malloc() and free() that was fixed in avr-libc 1.7.
OK, so the real point is that there is a bug in malloc(). I seem to recall this being discussed before, and you are right, the library appears to be quite old.
The logical thing to do, it would seem, would be to fix the bug; then everything else should flow from that. I'm not sure how to rebuild libc.a, or where a new version might be found; perhaps someone else can comment on that.
A common problem with dynamic memory allocation is fragmentation. For example, when your application starts the system has, say, 8K available. You then allocate 1K for your own uses, and after you're done you free that area of memory. Unfortunately you don't get back the 8K you started with, but two areas, 7K and 1K, both freely available; you can no longer allocate 8K of memory as it isn't available as a contiguous block.
Over time the memory becomes very fragmented: you still have 8K free, but in lots of small blocks that cannot be allocated as one large block. This was a common problem on PCs. Not sure about Arduino, haven't tried it.
Years ago I wrote a control system that ran 24/7, and initially it suffered from this problem: eventually it would fail as it ran out of memory. So I removed the use of malloc, calloc and realloc and instead implemented my own system.
This would allocate a large block of memory at application start-up and then divide it up as required using my own versions of malloc and free. The advantage of this was that I could get back entire blocks of memory without the fragmentation.
If fragmentation is a problem on Arduino then it should be quite straightforward to implement this kind of memory management.
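A minimal sketch of that idea with fixed-size blocks (names and sizes made up):

#include <stddef.h>
#include <stdint.h>

#define POOL_BLOCKS     16
#define POOL_BLOCK_SIZE 32          /* every allocation is one block */

static uint8_t pool[POOL_BLOCKS * POOL_BLOCK_SIZE];
static uint8_t used[POOL_BLOCKS];   /* 1 = block in use */

void *pool_alloc(void) {
    for (int i = 0; i < POOL_BLOCKS; i++) {
        if (!used[i]) {
            used[i] = 1;
            return &pool[i * POOL_BLOCK_SIZE];
        }
    }
    return NULL;                    /* pool exhausted */
}

void pool_free(void *p) {
    ptrdiff_t off = (uint8_t *)p - pool;
    if (off >= 0 && off < (ptrdiff_t)sizeof(pool))
        used[off / POOL_BLOCK_SIZE] = 0;
}

Because every block is the same size, any freed block can satisfy any later request, so this kind of fragmentation can't build up.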
That's a question of updating the WinAVR installation; does Arduino's core team have a good reason for not doing this?
But I would say that the issues I was facing could be better explained by the following comment from SPlatten.
SPlatten:
A common problem with dynamic memory allocation is fragmentation. For example, when your application starts the system has, say, 8K available. You then allocate 1K for your own uses, and after you're done you free that area of memory. Unfortunately you don't get back the 8K you started with, but two areas, 7K and 1K, both freely available; you can no longer allocate 8K of memory as it isn't available as a contiguous block.
SPlatten:
So I removed the use of malloc, calloc and realloc and instead implemented my own system.
This would allocate a large block of memory at application start-up and then divide it up as required using my own versions of malloc and free. The advantage of this was that I could get back entire blocks of memory without the fragmentation.
But what's the reason for the 8K block not being freed back as one contiguous block? Is it due to malloc's internals? And if it's possible to write a wrapper around malloc() to handle memory better, couldn't that be incorporated into avr-libc's malloc() itself?
I'm not sure why; I always thought it was something odd about Microsoft's C compiler on the PC, but I've since encountered the same problem with gcc on Linux and also with C compilers on QNX.
fuh:
But what's the reason for the 8K block not being freed back as one contiguous block? Is it due to malloc's internals? And if it's possible to write a wrapper around malloc() to handle memory better, couldn't that be incorporated into avr-libc's malloc() itself?
Well, I don't know that he is right. Certainly fragmentation can occur, depending on the order of allocating/freeing. But in the given example? Looking at the source for avr-libc 1.7.1, which I seem to have downloaded at some stage, I see these comments in free():
/*
* Trivial case first: if there's no freelist yet, our entry
* will be the only one on it. If this is the last entry, we
* can reduce __brkval instead.
*/
So if you allocate 8K and then free it, you should be back to no freelist.
Then:
/*
* Now, find the position where our new entry belongs onto the
* freelist. Try to aggregate the chunk with adjacent chunks
* if possible.
*/
Then it tries to merge the current freed block with earlier ones.
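If you want to check the first comment for yourself on an AVR, something like this (untested, avr-libc specific) should show __brkval dropping back after the free():

#include <stdlib.h>

extern char *__brkval;              /* avr-libc's heap-top pointer */
extern char __heap_start;

static char *heap_top(void) {
    return __brkval ? __brkval : &__heap_start;
}

void demo(void) {
    char *before = heap_top();
    char *p = (char *)malloc(128);
    char *grown = heap_top();       /* higher: the heap grew */
    free(p);                        /* topmost (and only) chunk, so
                                       free() just lowers __brkval */
    char *after = heap_top();       /* expect: after == before */
    (void)before; (void)grown; (void)after;
}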