Sorry, what do you want to accomplish with this local variable?
If you subtract the address assigned by malloc() from the address of a local variable (&ptr) - you can get any huge value.
Example:
- stack is on region 0x20000000 (DTCM RAM)
- malloc uses SRAM, e.g. 0x30000000
0x20000000 - 0x30000000 gives you a really huge value: 0xF0000000 (all the other address space, because the unsigned subtraction "wraps around").
This is not what you want to know.
Could you use the info from linker_script.ld, e.g.:
__HeapLimit
Not tried, but something like (note: a linker symbol has no value of its own - you have to take its address, &__HeapLimit):
extern unsigned long __HeapLimit;
int *ptr = malloc(sizeof(int));
size_t available = (size_t)&__HeapLimit - (size_t)ptr;
free(ptr);
Actually, even this will FAIL:
what if your memory is already fragmented by heavy use of malloc()? There is no guarantee that the free segments on the heap are contiguous. malloc() can find a free hole for just a 4-byte variable anywhere in the heap, while the tail behind it is still allocated. This math operation would then give you a wrong result.
Actually: with malloc, nobody can really tell you how much is free. It can happen that the memory becomes so fragmented after a while that you might still have 2 KB free in total, but these 2 KB are spread over many different segments. You would not be able to allocate the 2 KB as one single chunk.
The correct question to ask malloc is actually: what is the largest block of memory I could still allocate? (Not how much is free in total - not the same info.)
You have to check the return value when calling malloc(). If you see you cannot allocate 2 KB anymore - you hit the case that memory is meanwhile too fragmented. It does not mean only 2 KB are left: you can still have N blocks of 1 KB available, just 2 KB as one chunk is not possible anymore.
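The only way to get that "largest still-allocatable block" figure is to probe: try malloc() with decreasing sizes until one succeeds, then free it again right away. A rough sketch (function name and halving strategy are just my illustration, not a standard API):

```c
#include <stdlib.h>

/* Probe the largest block malloc() can still hand out, by
   halving the request until an allocation succeeds.
   The result is only a snapshot - it can change with the
   very next malloc() or free() call. */
size_t largest_free_block(size_t upper_bound)
{
    size_t size = upper_bound;
    while (size > 0) {
        void *p = malloc(size);
        if (p != NULL) {
            free(p);          /* give it back immediately */
            return size;      /* this size is still possible */
        }
        size /= 2;            /* too big - try half */
    }
    return 0;                 /* nothing left at all */
}
```

Halving only gives a power-of-two estimate; you could refine it with a bisection between the last failing and the last succeeding size.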
Personally, I try to avoid using malloc. The RTOS provides MemoryPool functions which are much safer (in terms of avoiding memory fragmentation).
Or I use my own MemPool implementation: it is based on fixed-size chunks. Memory is managed in these chunks. There is always a guarantee that a chunk is still available unless really all chunks are allocated. And the chunk size correlates with the most often used size, e.g. for dynamic buffers that need around 4K most of the time.
In my own MemPool implementation I can track the use: how much is free, what was the peak (watermark) of use, how many MemPool segments are still in use...
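This is not my MEM_Pool implementation attached below, but the idea in miniature: a fixed-size-chunk pool with a free list plus in-use and watermark counters (all names and sizes here are made up for illustration):

```c
#include <stddef.h>
#include <stdint.h>

#define POOL_CHUNK_SIZE  64   /* pick the size you need most often */
#define POOL_NUM_CHUNKS  8

/* Each free chunk stores the pointer to the next free chunk,
   so the free list costs no extra RAM. */
static _Alignas(void *) uint8_t pool_mem[POOL_NUM_CHUNKS][POOL_CHUNK_SIZE];
static void    *pool_free_list;
static unsigned pool_in_use;      /* chunks currently handed out   */
static unsigned pool_watermark;   /* peak of pool_in_use ever seen */

void pool_init(void)
{
    pool_free_list = NULL;
    pool_in_use = pool_watermark = 0;
    for (int i = 0; i < POOL_NUM_CHUNKS; i++) {
        *(void **)pool_mem[i] = pool_free_list;   /* push chunk */
        pool_free_list = pool_mem[i];
    }
}

void *pool_alloc(void)
{
    void *chunk = pool_free_list;
    if (chunk != NULL) {
        pool_free_list = *(void **)chunk;         /* pop from free list */
        if (++pool_in_use > pool_watermark)
            pool_watermark = pool_in_use;         /* track the peak */
    }
    return chunk;   /* NULL only when ALL chunks are taken */
}

void pool_free(void *chunk)
{
    *(void **)chunk = pool_free_list;             /* push back */
    pool_free_list = chunk;
    pool_in_use--;
}
```

Because every chunk has the same size, any free chunk satisfies any request - fragmentation simply cannot happen, and pool_in_use / pool_watermark give you the usage statistics for free.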
Here a simple MemPool implementation (I use in all my projects):
MEM_Pool.h (2.2 KB)
MEM_Pool.c (4.4 KB)