
Topic: Using an ATtiny85 as SPI Slave (Read 23289 times)

bobcousins

I was thinking of adding an SPI protocol to the cyz_rgb firmware, so I might have a go with this.

However, I would really try to avoid putting delays into SPI transactions. The slave should return data it has already calculated, or the master should disconnect and then poll a status register until the slave indicates it is ready. Pausing mid-transfer is essentially impossible with hardware SPI.

At a pinch, you could insert a padding byte or two into the transaction to allow the slave some time, but you always have to draw the line somewhere.
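For example, a status poll on the master could look something like this (just a sketch of the idea - the CMD_STATUS command byte and READY bit are invented here, they are not part of cyz_rgb or any existing firmware):

Code: [Select]
#include <SPI.h>

// Sketch only: CMD_STATUS and STATUS_READY are invented for illustration.
const byte CMD_STATUS   = 0x0F;
const byte STATUS_READY = 0x01;

// One short transaction: ask the slave for its status byte.
bool slaveReady() {
  digitalWrite(SS, LOW);
  SPI.transfer(CMD_STATUS);             // request the status register
  byte status = SPI.transfer(0xFF);     // clock the status byte back out
  digitalWrite(SS, HIGH);               // end the transaction
  return status & STATUS_READY;
}

// After issuing a slow command, disconnect and poll in separate transactions
// until the slave says it is ready, instead of stalling one long transaction.
void waitForSlave() {
  while (!slaveReady()) {
    delayMicroseconds(10);
  }
}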

Please ask questions in the forum so everyone can benefit. PM me for paid work.

Tzimitsce

#31
Aug 29, 2014, 11:00 am Last Edit: Aug 29, 2014, 11:06 am by Tzimitsce Reason: 1
Quote
However, I would really try to avoid putting delays into SPI transactions. The slave should return data it has already calculated, or the master should disconnect and then poll a status register until the slave indicates it is ready. Pausing mid-transfer is essentially impossible with hardware SPI.


It's not like you are waiting in the middle of a transaction, you're just waiting for the "second" transaction. I've seen many chips requiring this - and some actually even have "is chip ready" commands for detection (which might be a better approach if your protocol is a complex one).

I did some tests with my code and saw that my approach actually works, but some changes are necessary -- especially when introducing the SS logic. I did my tests with "disable the USI when SS is high, enable it when SS is low" (via a pin-change interrupt), but the master needed to wait 15 microseconds after pulling SS low before starting the first transfer. If I waited less than 10 microseconds there, I got wrongly shifted values or read back the data I had just sent -- which shows the slave hadn't put all the data into USIDR yet. With the ATtiny85 running from the internal 16 MHz PLL clock, the fastest I could go was the DIV32 clock divider (a 500 kHz transfer), but whenever I removed the microsecond delay it didn't work even in DIV128 mode (125 kHz).

Besides, I found that busy-waiting in the ISR wasn't very efficient, so jabami's approach of saving the last command and assigning USIDR according to it might be better - since we're not busy-waiting anywhere. ISR or not, busy-waiting blocks the chip.

Another note: so far, whenever I tried to read USIBR (the buffer register) instead of USIDR in the overflow ISR, I saw the values sent by the master shifted by one bit. I'll try to find out what causes this, but I'm suspicious of the ISR and the USIBR latching logic in the ATtiny. I think USIBR is updated slightly late, so using it in the overflow ISR is impractical. I will look into this some more.

Tzimitsce

#32
Aug 29, 2014, 08:46 pm Last Edit: Aug 29, 2014, 08:56 pm by Tzimitsce Reason: 1
OK, so I think I've done enough :)

Below, you can find my ATtiny85 SPI slave code and the client code, which use the USI.

Basically, the pin-change interrupt on pin 3 (the SS pin) activates or deactivates the SPI logic. Once it is activated, the overflow ISR reads the command and prepares the command buffer; consecutive calls to the ISR then simply read from this buffer until nothing is left. Once SS goes high, the SPI functionality is switched off and the command buffer is cleared as well. This way, the master can abort an SPI data transfer prematurely by pulling SS high.
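In outline, the slave side works something like this (a rough, untested sketch of that structure only - the buffer size and command handling here are placeholders, not my actual code):

Code: [Select]
#include <avr/io.h>
#include <avr/interrupt.h>

#define SS_PIN PB3                      // SS on pin 3

static volatile uint8_t buf[4];         // echo + response bytes
static volatile uint8_t buf_pos, buf_len;

static void spi_slave_init(void) {
  GIMSK |= (1 << PCIE);                 // pin-change interrupts on...
  PCMSK |= (1 << PCINT3);               // ...but only for the SS pin
  sei();
}

static void usi_spi_enable(void) {
  DDRB |= (1 << PB1);                   // drive DO (MISO) only while selected
  USICR = (1 << USIOIE)                 // overflow (full byte) interrupt on
        | (1 << USIWM0)                 // three-wire (SPI) mode
        | (1 << USICS1);                // shift on the master's SCK
  USISR = (1 << USIOIF);                // clear the flag, reset the bit counter
  USIDR = 0;
}

static void usi_spi_disable(void) {
  USICR = 0;                            // USI off
  DDRB &= ~(1 << PB1);                  // release DO
  buf_pos = buf_len = 0;                // forget any half-finished command
}

ISR(PCINT0_vect) {                      // fires on every change of SS
  if (PINB & (1 << SS_PIN)) usi_spi_disable();
  else                      usi_spi_enable();
}

ISR(USI_OVF_vect) {                     // one full byte has been shifted in/out
  uint8_t in = USIDR;
  if (buf_len == 0) {                   // first byte after SS went low = command
    buf[0] = in;                        // echo the command back
    /* ...fill buf[1..] with the response for this command... */
    buf_len = 3;
    buf_pos = 0;
  }
  USIDR = (buf_pos < buf_len) ? buf[buf_pos++] : 0;
  USISR = (1 << USIOIF);                // ready for the next byte
}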

With my code, I could use the DIV16 divider on the Arduino Mega as master -- which means a 1 MHz clock -- better than before. I could even use 2 MHz, but it wasn't as consistent as 1 MHz.

On the SPI master side, I used the code below:

Code: [Select]
#include <SPI.h>

byte cmd;
unsigned int ret;

void setup() {
  Serial.begin(9600);                   // baud rate is an assumption
  pinMode(SS, OUTPUT);
  digitalWrite(SS, HIGH);               // slave deselected by default
  SPI.begin();
  SPI.setClockDivider(SPI_CLOCK_DIV16); // 1 MHz on a 16 MHz Mega
}

// The loop function is called in an endless loop
void loop() {
  digitalWrite(SS, LOW);                // select the slave

  delayMicroseconds(5);                 // REMARK 1
  SPI.transfer('C');                    // some command

  delayMicroseconds(5);                 // REMARK 2

  cmd = SPI.transfer(0xFF);             // first, we read the "echo"

  // then some readings:
  byte *resp = (byte*) &ret;
  for (unsigned int i = 0; i < sizeof(unsigned int); ++i) {
    delayMicroseconds(5);               // REMARK 3
    resp[i] = SPI.transfer(0xFF);
  }

  digitalWrite(SS, HIGH);               // disable slave device
  Serial.print("CMD: ");
  Serial.print(cmd);
  Serial.print(" Read value : ");
  Serial.println(ret, DEC);
  delay(100);
}


REMARK 1: On the ATtiny85, pin-change interrupts are serviced quite slowly. If you don't wait for the interrupt to be handled, the USI is not initialized properly and you get weird shifts in the sent and received data, since the USI buffers get mangled. This is the "command delay time".

REMARK 2: Once you have sent the command, the ATtiny runs through some if/switch checks, prepares the necessary data and puts it into USIDR. If you don't wait here, you start clocking USIDR out before the ATtiny has put anything into it, which shifts the response -- causing you to read wrong data. This is the "command acknowledge time".

REMARK 3: Consecutive reads, although they involve less work, still run some checks, so we must wait for them too. I found 4 microseconds to be good enough, but 5 microseconds is even better. This is the "command pulse time".

I think those delays are due to the Tiny processing interrupts and their firing times. I've tried my best to avoid them, but I couldn't: every time I removed them, the data I received started to get weird. I guess when I'm using this chip, I'll use it this way.

At one point, on the Tiny85 side, I was using digitalRead() etc. in the ISR, and the "command delay time" had to be at least 15 microseconds or I couldn't communicate at all! That's why I've stopped using the digitalRead()/pinMode() family of functions; they are quite slow compared to direct memory-mapped I/O.
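In the ISR I now read the port register directly, something like this (assuming SS sits on PB3, i.e. Arduino pin 3):

Code: [Select]
#include <avr/io.h>

// Reading SS straight from the input register takes a couple of cycles;
// digitalRead() goes through the core's pin-mapping tables and is far
// slower, which matters inside the USI ISRs.
static inline bool ss_is_high(void) {
  return PINB & (1 << PB3);             // assuming SS is wired to PB3
}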

So, please have a look and give feedback. I've tried to write general master/slave library-style code, but I haven't tested the master functionality at all, so be advised.

fungus


Quote
I think those delays are due to the Tiny processing interrupts and their firing times. I've tried my best to avoid them, but I couldn't...


I think you're barking up the wrong tree.

The I2C protocol is designed to avoid all of those problems, and it uses fewer pins than SPI (which is important on a Tiny85).

No, I don't answer questions sent in private messages (but I do accept thank-you notes...)

Tzimitsce

#34
Aug 30, 2014, 02:14 pm Last Edit: Aug 30, 2014, 02:25 pm by Tzimitsce Reason: 1
Yeah, I was in quite a dilemma over which one to use, but I went with SPI because I'm more familiar with it :)

Maybe I should switch :)

ADDENDUM:

Is I2C actually that different? The only real difference is that there is a "protocol-specified wait", done by holding the clock line instead of by delays. Maybe that's better, since it's defined by the protocol, but it's not that different.

Using fewer pins is still a good thing though :)

bobcousins

I2C has other problems, and is generally run slower than SPI, but works well on the Tiny85. I am actually using a Tiny45 with the 8 MHz internal clock for the OpenBlink (aka "BlinkM") type controllable LEDs, with the cyz_rgb firmware.

A limitation of I2C is the number of addresses, unless we use 10-bit addresses, so I was thinking an SPI-like protocol (two-wire, no slave out) could be used. The WS2811 uses a one-wire protocol, but with tight timing requirements.

Anyway, at about $1.50 or so per unit, the OpenBlink design is not cost-effective for lots of LEDs; I can get WS2812 LEDs for about 30c each, and a 328 can handle a long string of those.
Please ask questions in the forum so everyone can benefit. PM me for paid work.

Tzimitsce

Speed wasn't that much of an issue for me, but I checked and I'm already using the I2C pins for other purposes on the master Arduino, so using I2C is out of the question for me - at least for now.

So, we're moving on with SPI :) Did anyone check the code? Can you spot a point for improvement? I feel like there should be some somewhere. I hate those delays :)

fungus

#37
Aug 30, 2014, 05:46 pm Last Edit: Aug 30, 2014, 09:58 pm by fungus Reason: 1

Is I2C actually that different?


Yes. I2C is a much more complete protocol than SPI.

e.g. An I2C slave can hold the SCL line low when it detects a start condition. This puts the master "on hold" until SCL is released. The Tiny85 has a hardware latch for this, so the software response time is never an issue; it can take as long as it needs (within reason) to respond to the start condition, set up the USI to receive the data, then release the master to send the data bits.

I think the start-condition detector can even wake up the chip from sleep mode. You can be sleeping and still respond to I2C transmissions.

(See bits 4+5 of the USICR register for details).
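Something along these lines (an untested sketch; the register setup follows the usual USI-TWI slave examples such as AVR312):

Code: [Select]
#include <avr/io.h>
#include <avr/interrupt.h>

// Sketch: put the USI into two-wire mode so the hardware latch holds SCL
// low for us after a start condition.
void usi_twi_slave_init(void) {
  PORTB |=  (1 << PB0) | (1 << PB2);    // SDA (PB0) and SCL (PB2) released (high)
  DDRB  |=  (1 << PB2);                 // SCL as output so the USI can hold it low
  DDRB  &= ~(1 << PB0);                 // SDA as input
  USICR  = (1 << USISIE)                // interrupt on start condition
         | (1 << USIWM1)                // two-wire mode (bits 5:4 of USICR)
         | (1 << USICS1);               // shift on the external (master's) clock
  USISR  = 0xF0;                        // clear all flags, reset the counter
  sei();
}

ISR(USI_START_vect) {
  // SCL stays low until USISIF is cleared, so this handler can take its time
  // setting up for the incoming byte before releasing the master.
  USICR |= (1 << USIWM0) | (1 << USIOIE); // also stretch SCL on byte overflow
  USISR  = (1 << USISIF);                 // clearing the flag releases SCL
}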



I2C has other problems, and is generally run slower than SPI,


"generally".

You don't have to go at 100 kHz if you know you're talking to a Tiny85 @ 8 MHz. The Tiny85 hardware will work much faster than that.


A limitation of I2C is the number of addresses


Needing a separate slave select pin for every SPI device seems like a bigger limitation to me.

No, I don't answer questions sent in private messages (but I do accept thank-you notes...)

Tzimitsce

Maybe you are right - I2C is better. But being a more "complete" protocol, I've found it harder to implement. There are some libraries out there but, to be honest, I couldn't even bring myself to optimize them. There seems to be a lot of hassle in I2C for me. SPI is more familiar, and simpler for my project here.

Besides, this topic is about SPI slave usage on the ATtiny, so maybe we shouldn't pollute it with "which one is better" debates.

BTW, I've uploaded a sample of my code and some readmes/release notes to my GitHub, in case anyone needs something.

PS: I'm still angry about those delayMicroseconds() calls though. Seriously, I can make the SPI library clock work at 4 MHz, but when I trim a few delays, some readings become garbage. Duh..

fungus


Maybe you are right - I2C is better.


"Better" is the wrong word. It depends on the situation.

To me it seems preferable for an ATtiny85 slave - it uses fewer pins and has flow control to avoid timing problems.


But being a more "complete" protocol, I've found it harder to implement. There are some libraries out there...


I won't disagree.

(As for libraries, I don't think I've seen one which does a correct implementation of I2C...)
No, I don't answer questions sent in private messages (but I do accept thank-you notes...)

Tzimitsce

I tried to make a library for my case (it can be seen on GitHub in a preliminary state). I've written it just to the point where the master device transmits and the slave device receives. By the time I got to STOP-condition detection, I had already eaten 35% of the flash ROM, and I hadn't even started the protocol part.

I think I'll quit that. I tried to make something "general", but that's nearly impossible with the USI. You have to make some very application-specific assumptions in order to get something working with USI-TWI.

It's not like I'd do anything with the two extra pins anyway; they would just float around.

PS: I've discovered why I need the delayMicroseconds() calls in the code: the reason was the other interrupts. Once I compile my code with the ADC and timers left out, I can decrease the transmission delay to 1 microsecond - which is no big deal at all. But once I turn the ADC and timers on, since those interrupt in almost every function, I need to put some delays in. No wonder the data was being corrupted once in a while but not always.
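For example, one workaround might be to mask those interrupts while SS is low (a rough, untested sketch, assuming SS on PB3 - not something I have tried yet):

Code: [Select]
#include <avr/io.h>
#include <avr/interrupt.h>

static uint8_t saved_timsk;
static uint8_t saved_adcsra;

ISR(PCINT0_vect) {                      // SS pin-change interrupt
  if (!(PINB & (1 << PB3))) {           // SS pulled low: transaction starting
    saved_timsk  = TIMSK;               // remember what was enabled
    saved_adcsra = ADCSRA;
    TIMSK   = 0;                        // mask the timer interrupts
    ADCSRA &= ~(1 << ADIE);             // mask the ADC conversion interrupt
    /* ...enable the USI here... */
  } else {                              // SS released: transaction finished
    TIMSK  = saved_timsk;               // restore everything
    ADCSRA = saved_adcsra;
    /* ...disable the USI here... */
  }
}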
