My understanding is that a fair number of people here are using my core, so I thought I'd start a new thread to let people know about this release, and to encourage people to bang on it and find bugs that may have slipped in.
Link time optimization is now supported (enabled by an option in the Tools menu) - this only works if you have AVR Boards 1.6.13 or later installed (that version ships with IDE 1.6.11 and later).
The documentation has also been greatly expanded - there's now a documentation page specific to each supported chip that covers the concerns relevant to that chip, plus pinout diagrams (courtesy of Hansibull) for all the supported ATtinys.
The missing ADC reference option for the x5 series is now available, and you can now use the differential ADC channels by passing larger numbers to analogRead()
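For example, something along these lines (a rough sketch - the channel number below is just a placeholder I made up; the real differential channel numbers for each chip are listed on its documentation page):

const uint8_t DIFF_CHANNEL = 7;  // placeholder - look up the actual differential channel number for your chip

void setup() {
  Serial.begin(9600);
}

void loop() {
  int reading = analogRead(DIFF_CHANNEL);  // numbers above the single-ended range select differential channels
  Serial.println(reading);
  delay(500);
}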
This also brings a huge number of bug fixes, and probably some other assorted stuff I forgot about.
That's great! Keep the good stuff comin'! I don't think people realize how great LTO really is, especially when the flash memory is small. I created a simple LTO comparison table to convince users that LTO is the real deal.
I recently got my hands on some of the new ATtiny chips (the 814 and the like) and was trying to see how I could leverage them, so I was wondering: is there any plan to add them to the supported list?
At some point, but it's a huge job. The peripherals are all different from the rest of the AVR product line, so everything has to be rewritten for them, and they will have lots of library compatibility issues.
Yeah, it's a real bummer. It looks like they basically stuck PIC peripherals onto an AVR core - which of course throws out one of the major advantages of AVR, that being how libraries are portable across the product line without significant changes.
The new peripherals look so awesome, too. It's the kind of datasheet you need to lean back while reading to avoid getting drool in your keyboard.
AVR + PIC. New and improved = combined misery. I guess since Microchip bought Atmel, they get to write the new music :o
I'm quite intrigued by LTO. Can anyone explain how it works from a high-level viewpoint?
Is it a simple case of removing any unreferenced code at the subroutine level rather than at the module level? I'm always curious when there is a "free" improvement in code size.
avr_fred:
I'm quite intrigued by LTO. Can anyone explain how it works from a high-level viewpoint?
Traditionally, a C(++) linker is nearly brain-dead. It performs just a small number of not particularly valuable optimizations like dead code removal and reordering so long jumps can be changed to short jumps.
A C(++) compiler is just the opposite. A modern compiler is able to dramatically change the generated code to squeeze out nearly every inefficiency.
LTO enables the linker to perform some / many / most / all of the same optimizations as the compiler. Instead of getting simple crude linkable-objects the linker essentially gets the internal data from the compiler.
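A crude two-file sketch of what that means in practice (assuming avr-g++ or any recent GCC - the -flto flag is what turns it on, and it has to be passed to both the compile and the link steps):

// led.cpp - a helper in its own compilation unit
void setLed(bool on) {
  (void)on;  // ...a couple of register writes would go here...
}

// main.cpp - at compile time the caller only sees a declaration
void setLed(bool on);

int main() {
  setLed(true);  // without LTO this stays a call; with LTO the linker can inline the body across files
  return 0;
}

// Build (both steps need the flag):
//   avr-g++ -Os -flto -c led.cpp main.cpp
//   avr-g++ -Os -flto led.o main.o -o firmware.elf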
Traditionally there was no such thing as a "C(++) linker".
It was just a linker. The language or tool used to create the object was irrelevant. The linker just linked the objects together by filling in and resolving the needed symbols.
It followed the "unix philosophy" (Unix philosophy - Wikipedia)
LTO enables the linker to perform some / many / most / all of the same optimizations as the compiler. Instead of getting simple crude linkable-objects the linker essentially gets the internal data from the compiler.
The unfortunate side effect is that the final object code can be unrecognisable and nearly if not totally impossible to debug using a source level debugger.
Things like collapsing multiple levels of functions away into inline code are common, even if they were specifically placed in separately compilable compilation units (like library functions).
Another unexpected side effect is that tiny tweaks to the source code can make dramatic object code output changes including fairly substantial changes to the overall size of the final image.
This is because it can affect the optimizations done.
Yes, this is also true when the compiler does its optimizations, but from what I've seen not nearly to the same extent.
I've seen cases where code size explodes due to the compiler/linker "optimizing" away many functions and choosing to inline them in multiple places. Yeah, the code might be a little faster, but in some cases it isn't worth the extra code size.
I've seen cases where depending on how your code is written, the timing of functions like digitalWrite() can vary substantially. i.e. from 6us down to 4us. All depending on how the sketch code is written.
A tiny tweak to how a local is used can trigger massively different optimizations and generate significantly different code in terms of speed and size.
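If that kind of aggressive inlining bites you on flash size, one knob worth knowing about is GCC's noinline attribute (nothing core-specific, just the standard attribute) - a rough sketch of keeping a helper as a real call so its body isn't duplicated at every call site:

__attribute__((noinline)) void logReading(int value) {  // keep a single copy in flash instead of inlining everywhere
  Serial.println(value);
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  logReading(analogRead(A0));  // stays an actual call, trading a little speed for smaller code
  delay(1000);
}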
First, thanks for a great job. This is very useful for my projects, which use ATtiny85 whenever possible.
It seems as if WDT constants are not defined. Things like SLEEP_MODE_PWR_SAVE.
I haven't tried stuff like
WDTO_15MS, WDTO_30MS, WDTO_60MS, WDTO_120MS, WDTO_250MS, WDTO_500MS, WDTO_1S, WDTO_2S, WDTO_4S, WDTO_8S
because past implementations seemed buggy (at least for ATtiny), so I have switched to my own definitions.
Iirc those constants aren't supplied by the core, but by the AVR libraries that ship with the compiler.
I don't normally use those libraries - they're just wrappers around a few register writes, and you need to read and understand that datasheet section anyway.
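If anyone does want to use them, the definitions live in avr-libc's <avr/wdt.h> and <avr/sleep.h>, so pulling in those headers should be enough for something like this to compile (a rough sketch - which sleep modes are defined depends on what the chip's datasheet actually offers, which may be why SLEEP_MODE_PWR_SAVE comes up missing on some tinies):

#include <avr/wdt.h>    // WDTO_15MS ... WDTO_8S, wdt_enable(), wdt_reset()
#include <avr/sleep.h>  // SLEEP_MODE_* constants, set_sleep_mode(), sleep_mode()

void setup() {
  wdt_enable(WDTO_2S);  // reset the chip if loop() ever stalls for roughly 2 s
}

void loop() {
  wdt_reset();                      // pet the watchdog each pass
  // ...do the real work here...
  set_sleep_mode(SLEEP_MODE_IDLE);  // idle exists on every AVR; deeper modes depend on the chip
  sleep_mode();                     // nap until the next interrupt (e.g. the millis timer tick)
}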
I am using a library called "DIY ATtiny", which I believe is a fork of ATTinyCore for the ATtiny13A.
My goal is to create a door sensor (alarm): the ATtiny13A will detect the door status (open/closed) with an interrupt handler, read the battery voltage with its ADC, and report both to an ESP8266 over a serial connection.
However, I can't get the emulated serial connection on the ATtiny13A to work. Any suggestions as to why it doesn't?
void setup() {
  Serial.begin(9600);               // start the emulated serial port at 9600 baud
  Serial.println("Setup is over");
} // end of setup
My core does not support the tiny13, so you should contact the author of that fork. I do not expect the serial emulation I use in my core to work on the tiny13 without significant modification; a software serial implementation would take up a significant portion of the available flash on the tiny13 - that terribly constrained flash is why I do not support the tiny13 with my core.