Hi everybody!
Remember Smeezekitty's core13, the add-on that brought ATtiny13 support to the Arduino IDE? Well, it had some major flaws, such as inaccurate timing, an interrupt every 256th clock cycle, a non-optimized core, and poor documentation. When it was released in 2012 I played a lot with it to figure out its pros and cons, and I concluded that the core just wasn't mature enough.
A lot has happened since. Development on core13 seems to be abandoned, and a lot of poorly documented, poorly optimized forks have popped up. I've also become much better at Arduino and AVR programming! My idea is to leave everything behind and make the ATtiny13 great again!
Why bother to use a microcontroller as small as the ATtiny13?
It's dirt cheap (we're talking cents here!)
They come in both SOIC and the breadboard-friendly DIP package
They're pin compatible with the ATtiny25/45/85
You're forced to learn how to write more efficient code
So what's so great about the promising MicroCore?
It has accurate timing implemented (delay(), delayMicroseconds())
The millis() interrupt is now driven by the WDT, which frees up the one and only timer, Timer0
Core functions can be disabled through a separate core file to save space. The core even has a "safe mode" that can be disabled to save even more space!
Like my other cores, it has AVR keyword highlighting. Try writing DDRB or PORTB in the IDE, and you'll understand
External interrupts using attachInterrupt() are supported
Well documented and 100% up to date - it supports the latest version of the Arduino IDE
Boards manager URL
Link time optimization (LTO) for further code optimization
This sounds awesome, how do I install this core?
You can either install it manually or by using the Boards Manager URL.
For instructions on how to install manually, click here! This is the recommended way to install if you want to make changes to the core settings.
For instructions on how to install using the Boards Manager URL, click here!
Hello, I just tried your core on a few of my running projects, and almost everything seems to work as it should.
The only problem is memcpy. I use it on a transmitter to convert a struct to an array for sending. On Smeezekitty's core it works, but when I try it on your core the array gets filled with seemingly random numbers.
If I copy a memcpy function directly into the sketch all is well, but that takes some extra space...
Is memcpy supposed to work, or is the core too minimal to support it?
That's weird... the memcpy function is not something I've ported; it's included in the avr-libc library. Please post your code, so I can help you debug it.
Overall, this core is much more lightweight and brings you more functionality than Core13
OK, here is the main part of the code; I've added the complete code and the receiver code below.
This is a minimal sensor transmitter with an ATtiny13 and an nRF24L01 that can last 1+ year on a single CR2032 battery. It takes ADC readings from a thermistor and a voltage divider.
I'm not at my computer, so I can't test this right now. I see you use the WDT, which is used to increment the millis counter. Try commenting out #define ENABLE_MILLIS in the core_settings.h file.
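As a sketch of what that change looks like (assuming core_settings.h uses a plain #define for this, as the post implies; the exact file layout is an assumption):

```c
/* In MicroCore's core_settings.h (exact layout is an assumption) */
//#define ENABLE_MILLIS   /* commented out: no WDT interrupt, no millis() */
```

With the define disabled, the watchdog interrupt is never set up, so a sketch that reconfigures the WDT for its own purposes won't fight the core for it.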
I agree my code is a bit messy... I plan to work on it, but there are so many new things to try that I never get around to tidying up working code... sorry.
Anyway, I tried your blinking-LED way to debug it, and it kind of worked.
Somewhere between delayMicroseconds(100); and delayMicroseconds(200); the code stops working.
So if I use delayMicroseconds(100); and shorter, and delay() for the rest, everything works perfectly.
Here is a minimal test sketch that shows the problem:
#include <string.h>

struct dataStruct {
  int16_t adcc_int;
  int16_t adcc_stand;
  uint32_t counter;
  byte id;
} myData;

#define TX_PLOAD_WIDTH sizeof(myData)
unsigned char tx_buf[TX_PLOAD_WIDTH] = {0};

int main(void) {
  myData.id = 7;

  // Set TX mode
  DDRB |= _BV(PB3);

  while (1) {
    delayMicroseconds(100); // This works
    //delayMicroseconds(200); // This does not work
    memcpy(tx_buf, &myData, sizeof(myData));

    // Blink the LED once per unit of the copied id byte
    for (uint8_t i = 0; i < tx_buf[8]; i++) {
      PORTB |= _BV(PB3);
      delay(200);
      PORTB &= ~_BV(PB3);
      delay(200);
    }
    delay(3000);
  }
}
Either this is a really strange problem or I'm doing something wrong?
I found the problem, but I need to do some more research before I can figure out exactly what's wrong. Have a look at this code line [MicroCore Github wiring.h]. If you use a number less than 199, it works just fine. The problem is the uS_new() function, which is a leftover from the old core13. I'm no assembly guy, so this function is like black magic to me. What version of core13 were you using? (And are you actually Swedish?)
I decided to replace the delay() and delayMicroseconds() functions with two macro wrappers based on _delay_ms() and _delay_us(). Doing this saves ~60 bytes, which is a lot on an ATtiny13. Now everything should work just fine.
It seems _delay_ms() and _delay_us() are inline functions, so using the macros many times in the code occupies a lot of flash space. I'm working on a new solution, and it will be ready soon.
Thanks for reporting issues like these. It forces me to write even better code.
I was using core13_19; that was the version that always worked best for me. But I think this one will replace it as my go-to core for the ATtiny13.
For now I've just removed uS_new() and live with the inaccuracy of the old delayMicroseconds().
I really like your boards file. I added an option for enabling PB5 as an I/O pin, and added two versions of your core, one minimal and one with all core settings enabled, plus the old core13_19. That makes switching between cores so much easier. (And yup, Swede here...)
Thanks! I believe I've found the best solution when it comes to handling the different delay functions.
To keep delayMicroseconds() accurate, I'll keep the wrapper macro. The only issue is that _delay_us() is an inline function, so calling delayMicroseconds() multiple times will eat up your flash memory, but the timing is really accurate!
For delay() I went with a loop instead, to prevent it from "growing" when called multiple times. Running a while loop like this only causes an overhead of ~1 us per ms, so it's really not a problem. The old core13 had horrible timing; delay(1) was actually 1.3 ms because of the rapid interrupts caused by the millis() timer.
The "new" delay function is actually two bytes smaller than the old one. It seems the compiler likes the do-while structure better. Interestingly, if I write ms-- instead of --ms without LTO enabled, the code actually gets 4 bytes larger!