I have successfully written a solar system control sketch using a Uno, but I really need a couple more analog inputs to do some extra monitoring. The existing programme only leaves 400 bytes of free RAM on the Uno; it has already been carefully optimised to reduce its size, and I don't have room in the box to use a Mega, so I was going to go down the Leonardo route until I saw that it has a 4K bootloader.
I know I can use a programming module, or even another Arduino, to be able to use all 32K of programme space, but will I lose the ability to use Serial.print() for debugging if I zap the bootloader?
As best I know, the bootloader completely ceases to operate in any manner at all as soon as it hands control to your sketch. When you compile a program that uses Serial.print, that code is compiled as part of the whole image that then gets downloaded into the Arduino.
The bootloader is not a "BIOS" in the sense that term was used prior to Linux and Windoze, where the running system could make function calls into resident firmware. The Arduino code you compile resembles a multiprocessing OS insofar as it provides all its own "drivers".
It's just as well: a fixed structure - a static vector table or an interrupt service call - is necessary to access BIOS functions, and that introduces massive compatibility considerations (which is again why such a scheme is not nowadays used inside, or alongside, a multiprocessing OS).
If therefore you compile code and load it over the bootloader using an ISP, you will achieve your end with no problems. On the other hand, I would be disinclined to concede that it is "carefully optimised to reduce the size" without review by an independent skilled coder. XD
Thanks Paul__B, you've confirmed what I hoped to be the case, but I did not want to spend £20 only to discover that I would need to buy both a Mega and a new box to put it all in. I still don't understand why the bootloader for the Leonardo is so big.
Yes Peter, I know programme flash, SRAM and EEPROM are three different things. The programme on the Uno runs to the 31K mark until I take out some debugging code, and on top of that there is only about 600 bytes of SRAM left when it's running. However, it is doing a lot of things: reading six temperature probes (which I would like to be eight - reading the extra two will only put about three more lines of code in the programme), deciding which valves to open/close, choosing which heat sources to use, and dumping all the data to a log file on an SD card (the log runs up to 400K per month). It also outputs real-time data to a web page, takes thermostat settings from another web page, and allows the data files to be downloaded to a PC over the Ethernet.
By the way, I started coding computers in the days of 5-hole punched tape. You just cannot appreciate the luxury of moving from a computer that had just 32 flip-flops to store temporary values to one that had 2K of magnetic core store.
still don't understand why the bootloader for the Leonardo is so big
The Leonardo uses a bootloader that also contains LUFA, which provides USB serial over a virtual communication channel.
Erasing the bootloader erases LUFA, so if you need serial you must fall back on the physical USART or use SoftwareSerial. Unless you are already linking that library, your flash size will increase; that leaves the hardware USART for serial comms if required. If you need USB, then you must go with an FTDI serial adapter or a compatible solution.
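To illustrate the USART fallback: on the Leonardo the hardware UART is exposed as Serial1 on pins 0 (RX) and 1 (TX), independent of the USB virtual COM port. A minimal sketch along these lines (the wiring of a USB-serial adapter to those pins is an assumption, not something from the thread) would keep debug output working after an ISP upload:

```cpp
// Sketch uploaded over ISP; debug output goes to the hardware USART
// (Serial1, pins 0 = RX and 1 = TX on a Leonardo) rather than the
// USB virtual COM port. A USB-serial adapter (FTDI or similar) wired
// to those pins carries the output on to the PC.
void setup() {
  Serial1.begin(115200);          // hardware UART, no LUFA/USB involved
  Serial1.println(F("booted"));   // F() keeps the string in flash, saving SRAM
}

void loop() {
  Serial1.print(F("uptime ms = "));
  Serial1.println(millis());
  delay(1000);
}
```

Using F() for the string literals is worth the habit here, since the original poster is also short of SRAM.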
I believe Freetronics has a Uno-compatible form-factor board that has much more memory (flash, EEPROM and SRAM), but it is fairly pricey.
Freetronics Goldilocks: SRAM 16kB, Flash 128kB, EEPROM 4kB, Clock 20MHz, ATmega1284p MCU
I think it was a limited production run, but there was a recent post stating that they still had a few: US$69.
CrossRoads on our forum has a great Mega1284P board in the Uno footprint... I own 3. He was recently putting together a set of orders, although I have not followed the current status.
No doubt, the 1284P is superior in SRAM, flash and I/O. It would be my choice.
The OP stated he was memory constrained in "ram", but I suspect he means flash program storage. In any case, the 1284 is the answer, but at a cost.
I hesitate to bring this up, but you talked about removing debug code to get more room. I was wondering if you're using #define / #ifdef / #endif for debug. Expanding on that: if you had, say, #define DEBUG1, #define DEBUG2 ... you could comment out all but one #define and do sectional debugging of the code. The preprocessor would then send only the active portions to the compiler, so the disabled debug code costs no flash at all.