Linker not complaining when overusing RAM by factor of 4?

I am porting a project that worked on several other targets, such as the ATmega644P and even some ARM targets, to an Arduino Mega 2560. Each platform configuration was set up using special config files that a Perl script parses to generate makefiles with conditional defines for various features.

Now the problem was that I had forgotten to disable a feature that caused a static array of 12000 bytes to be compiled into RAM.

Well, that is surely nearly four times the size of the ATmega's RAM, which is 4K. Yet this compiled and linked without complaints as far as I could see (I never checked the map file).
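For reference, the offending definition was essentially of this shape (the name and element type here are made up for illustration; only the 12000-byte size is real):

```c
#include <stdint.h>

/* A feature's static buffer: zero-initialized, so it lands in .bss
 * and claims 12000 bytes of SRAM all on its own. */
static uint8_t feature_buffer[12000];

/* Referenced elsewhere, so the linker cannot discard it. */
uint8_t *get_feature_buffer(void) { return feature_buffer; }
```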

So I got a lot of weird effects: global variables that, once assigned, would have a different value by the next statement, and so on. It was just confusing, and I had no idea about this array.

But now I wonder, why didn't the linker complain? It knows what AVR CPU architecture I have specified. What does it do when it generates the data that should go into .bss?

Shouldn't it discover these problems at link time instead of letting me run into really weird runtime problems? It's not a matter of a few bytes of overuse; it's four times the total RAM available. And that's not counting the stack or any other variables.

Is there some compiler/linker flag I should be specifying?

I don't believe it checks RAM usage; that is your job. On some chips, such as the Mega1280/2560, it is actually possible to add an additional 56 kB of RAM using the XMEM interface, so how would the compiler know about that?

You can check the RAM usage with the avr-size utility. I think later versions of the IDE do that, or at least modified versions do. Alternatively, there are other IDEs, such as UECIDE, which will run the command automatically.
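A typical invocation looks something like this (the `-C`/`--mcu` pretty-printing options come from the patched avr-size shipped with WinAVR-style toolchains; a stock binutils avr-size only prints the plain Berkeley table):

```shell
# Plain Berkeley-style section sizes: static RAM usage is .data + .bss
avr-size firmware.elf

# With the patched avr-size: a per-device summary including percentages
avr-size -C --mcu=atmega2560 firmware.elf
```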

Tom: Thanks (and special thanks for the avr-size tip!). OK, but isn't it somewhat absurd? Maybe it should be possible to specify this somehow: if you have a board (or CPU) type specified, then it should default to that CPU's unexpanded capability. But if some external RAM is available that can be mapped onto the address bus, there could be a separate switch or flag to either override or expand the default.

As it is now, it seems like it can take anything and just place it in whatever section it should go into. I can understand that the stack is problematic to check at link time, but .bss, .data and all the .init sections feel like they could be controlled at least to some extent.

And, out of pure curiosity, what actually happens in my case? Would the init code take whatever comes first, zero things out, take the next variable and keep going, wrapping around the RAM area several times until done? Or would it even go outside the valid memory area and write into invalid RAM (I guess it can amount to the same thing if the upper address lines are not significant)?

And would the code be compiled to resolve several variables to the same memory address?

The Arduino Mega 2560 allegedly has 8K bytes of RAM, not 4K.

Still doesn't explain how you can have a 12k array in it, though.

Essentially yes, all the RAM would be zeroed, but I think all of the registers would be too, as they are mapped into SRAM. I can't say what would happen after that, but in all likelihood the processor would crash and reset ad infinitum.

The other variables would have an address beyond the range of the memory, but because only a limited number of bits are used for addressing the RAM, the upper bits of the address would essentially be truncated, and so the location would wrap around to some unintended place: at best somewhere near the bottom of the heap, at worst in the register map or at the top of the stack. If they end up in the latter two, then writes to them could have unintended consequences, most likely leading to the processor resetting.

If by some magic it got past all that and started running your program with no obvious side effects, then you might be fooled into thinking all is well. But as the large variable extends through the stack pointer, that would now point at the bottom of the RAM (as it got zeroed too), which again would most likely lead to a crash. Even if it didn't, if you then started writing stuff to the variable, who knows what that would do. You'd be overwriting your stack, other variables in the heap, possibly registers. All hell would break loose and the processor would probably crash.


@michinyon, you can have a 12k variable if you, say, used the XMEM interface to add additional RAM. Without it, all that will happen is that you end up with undefined behaviour, which would most likely cause a crash.

Larswad: But now I wonder, why didn't the linker complain? It knows what avr CPU architecture I have specified.

Yes. And the AVR ATmega architecture supports up to 64 KB of RAM, counting external RAM.

Though you would have to connect that external RAM to your controller yourself, and it would cost you a lot of pins on the controller.

So why should the compiler or linker complain? Neither of them knows what external hardware you have connected to your Atmega controller.

@michinyon: my bad (yes, it is 8KB, it was just a mindmelt on my side).

@Tom Carpenter: Thanks for the long explanation of what typically happens. To be honest, I imagined something like that, but I'm not experienced in the AVR architecture, so I needed that explanation. I think it actually got pretty far through initialization, until it started to initialize a variable that was extern'ed from one translation unit to another. The second unit printed the value over serial after it was initialized, and it differed from the assigned value. That tells me the stack or something had already destroyed the value, OR that the code was referencing it outside the valid address range, giving random values back.

@jurs: OK, I see the point. For the ATmega family as a whole, I agree the linker cannot foresee the actual size (except maybe for that 64KB 'family' size that you mention). But the command line given to avr-gcc and the linker is more specific than that: it is not only 'avr6', it is -mmcu= being set to atmega2560 (if I remember right). That should make it able to set some restrictions, no? Then (as michinyon corrected me) it would know about 8KB. I'm not going on about this just to complain; I just think that if these limits were possible, they would be very nice to have. Because when this happens, such utterly chaotic things start to happen, and it is easy to forget that breaking the limit could be the reason.

EDIT: No, you are right, it cannot know about any external RAM. But why not have a switch to the compiler or linker that specifies/enables this (with the actual size)?
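EDIT 2: Answering myself after some digging. I believe (this is an assumption on my part, check the linker script shipped with your toolchain) that newer AVR binutils linker scripts size the data region as `DEFINED(__DATA_REGION_LENGTH__) ? __DATA_REGION_LENGTH__ : <default>`, in which case you can clamp it yourself and get a hard link-time error instead of silent wraparound:

```shell
# Clamp the data region to the ATmega2560's real 8 KB of internal SRAM
# (8192 = 0x2000; the region's origin already sits above the register/I-O
# area). If .data + .bss then exceed it, the link should fail with a
# "region ... overflowed" style error instead of wrapping at runtime.
avr-gcc -mmcu=atmega2560 -Wl,--defsym=__DATA_REGION_LENGTH__=0x2000 main.c -o main.elf
```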