I have a half-duplex RS485 master-slave network using MAX485 ICs. I have found that I need to keep the Driver Enable (DE) pin HIGH for a while after Serial.write(), otherwise the ATmega328 hangs after 5 minutes or so. I have seen this behaviour with delay(1), but if I change it to delay(20) it seems to be OK (17+ hours and counting).
What is the correct value I should be using, and how is it worked out?
Also, why would the ATmega328 crash like this? I would just see the TX_LED stay on and the slave stop responding.
#define TX_LED 13
#define DRIVER_ENABLE 2

byte address1 = 0x31; //1
byte address2 = 0x32; //2

void setup()
{
  pinMode(TX_LED, OUTPUT);
  pinMode(DRIVER_ENABLE, OUTPUT);
  Serial.begin(9600);
}

void loop()
{
  sendPacket(address1);
  delay(1000);
  sendPacket(address2);
  delay(1000);
}

void sendPacket(byte address)
{
  digitalWrite(TX_LED, HIGH);
  digitalWrite(DRIVER_ENABLE, HIGH);
  Serial.write(address);
  delay(20); //Delay to allow transmission.
  digitalWrite(TX_LED, LOW);
  digitalWrite(DRIVER_ENABLE, LOW);
}
If you don't keep the DE pin asserted for one byte time after you load the UART, you'll lose the last byte, which may cause a problem for the receiving device.
However, as all you are doing with that code is transmitting, you could rip the entire RS485 circuitry out and it shouldn't affect how the program runs.
So, based on what I've seen so far, I can't believe that the delay makes any difference.
Is there more to the story?
Rob
At 9600 baud it takes 1/960 of a second to transmit one byte (10 bits on the wire: start, 8 data, stop), namely about 1.041 ms. The transmission is handled by an interrupt after the byte is placed in the buffer, so it carries on after Serial.write() returns.
Clearly you need to allow over 1 ms for that to finish; I would have thought that 2 ms would be adequate, maybe 3 just to be safe.
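Spelling the arithmetic out as a sketch (the helper name is mine, not from the thread): with 8N1 framing each byte is 10 bits on the wire, so the byte time is 10 / 9600 s, roughly 1.042 ms.

unsigned long byteTimeMicros(unsigned long baud)
{
  return (10UL * 1000000UL) / baud;  // 1 start + 8 data + 1 stop = 10 bits per byte (8N1)
}
// byteTimeMicros(9600) == 1041, so delay(2) after the last write leaves a comfortable margin.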
Yeah, a delay of 2 ms seems to work fine, thanks for the explanation. Once I have the proper PCBs and cabling I can experiment with baud rates properly.
I have always assumed using delay() to be bad practice; would it be wise to use a timer/flag in this instance to set the DE pin?
I have always assumed using delay() to be bad practice
And normally it is.
You can read the TXCn bit in the UCSRnA register to see when the byte has gone.
You can also get an interrupt on that event.
Rob
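As a rough sketch of the interrupt route (my own arrangement, not from the thread): on an ATmega328P the stock Arduino core only uses the RX and UDRE vectors, so the Transmit Complete vector is free to drop DE automatically. This suits single-byte sends like the code above; with longer packets it would drop DE early if the TX buffer ever ran dry mid-packet.

#define DRIVER_ENABLE 2

ISR(USART_TX_vect)                    // fires once the last stop bit has left the shift register
{
  digitalWrite(DRIVER_ENABLE, LOW);   // release the RS485 bus
}

void enableTxCompleteInterrupt()      // call once after Serial.begin()
{
  UCSR0B |= (1 << TXCIE0);            // enable the USART Transmit Complete interrupt
}

void sendByteRS485(byte b)
{
  digitalWrite(DRIVER_ENABLE, HIGH);  // claim the bus before loading the UART
  Serial.write(b);                    // the ISR drops DE when the byte has gone out
}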
We investigated this a while back in more depth. There is a register bit you can read to see when the last bit has cleared the transmitter, but at a given baud rate that will always take the same time, so reading the register and doing a fixed delay amount to the same thing. You might be able to tweak it down by using delayMicroseconds() (e.g. to 1100 µs).
I think this might have been it:
while (!(UCSR0A & (1 << UDRE0)))  // Wait for empty transmit buffer
  UCSR0A |= 1 << TXC0;            // mark transmission not complete
while (!(UCSR0A & (1 << TXC0)));  // Wait for the transmission to complete
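For what it's worth, here is a sketch of how that wait could slot into the sendPacket() from the first post in place of the delay(20) (same pin names, just rearranged):

void sendPacket(byte address)
{
  digitalWrite(TX_LED, HIGH);
  digitalWrite(DRIVER_ENABLE, HIGH);
  Serial.write(address);
  while (!(UCSR0A & (1 << UDRE0)))  // Wait for empty transmit buffer
    UCSR0A |= 1 << TXC0;            // mark transmission not complete
  while (!(UCSR0A & (1 << TXC0)));  // Wait for the transmission to complete
  digitalWrite(TX_LED, LOW);
  digitalWrite(DRIVER_ENABLE, LOW); // safe now: the last stop bit has gone out
}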
But once I am using larger packets this could potentially block the rest of the program, though I guess it's unlikely that would be noticeable.
Could it make sense to just check once per main loop that the transmission has completed, rather than using the while loop?
Could it make sense to just check once per main loop that the transmission has completed?
Yes, although I don't see how the packet size affects this; it's just the last byte we're talking about here, isn't it? ... Oh, hang on, I just looked at your code again: why are you turning DE on and off for every byte?
You have to do something like Nick suggested every time you send a byte, unless you start using interrupts or a clever buffering scheme that allows you to test that bit every now and then. For example:
loop() {
  do other stuff
  test UCSR0A
  if ready for next byte
    send another byte
}
Rob
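A minimal non-blocking sketch of that idea (buffer and function names are mine, assuming UART0 on an ATmega328P and loading UDR0 directly instead of going through Serial.write(); Serial.begin() still sets the baud rate):

#define DRIVER_ENABLE 2

byte txPacket[8];   // bytes waiting to go out (caller keeps packets <= 8 bytes)
byte txLength = 0;  // how many bytes are in the current packet
byte txIndex  = 0;  // next byte to hand to the UART

void startPacket(const byte *data, byte len)
{
  memcpy(txPacket, data, len);
  txLength = len;
  txIndex  = 0;
  digitalWrite(DRIVER_ENABLE, HIGH);   // claim the bus before the first byte
}

void servicePacket()                   // call this once per pass through loop()
{
  if (txIndex < txLength)
  {
    if (UCSR0A & (1 << UDRE0))         // data register free for the next byte?
    {
      UCSR0A |= (1 << TXC0);           // clear TXC0 (write 1) before loading
      UDR0 = txPacket[txIndex++];
    }
  }
  else if (txLength > 0 && (UCSR0A & (1 << TXC0)))
  {
    digitalWrite(DRIVER_ENABLE, LOW);  // last stop bit has gone, release the bus
    txLength = 0;                      // packet finished
  }
}

void loop()
{
  // do other stuff
  servicePacket();
}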
davivid:
But once I am using larger packets this could potentially block the rest of the program, though I guess it's unlikely that would be noticeable.
Depends what the rest of the program does. If it wants to send another packet it is going to have to wait anyway.
Hello,
why do you set the TXC bit manually?
Isn't it set to "1" automatically when transmission begins?