UART requirement

Hello,

General question about UART requirements to sync properly.
I am using the receive-from-UART example code posted on this forum, example #3 from Serial Input Basics - updated - #2 by Robin2.

I have the start-and-end-markers version of the receive code. The TX side sends back-to-back <31> to the RX running the above-mentioned code. When the two units power up at the same time, the data stream is sync'd. When I lift the TX line and reconnect, the sync is lost and the RX side seems to ignore the data until I reset both sides. Scoping the TX-to-RX pin at power-on, I see a slight delay of 20 ms or so between the pin going high and the data stream starting. So I added a 20 ms delay to the TX loop, and that solved the sync issue. On the scope I can see each set of <31> is now preceded by about 20 ms of logic high. This leads me to my question.

Is there a minimum requirement for back-to-back data to be preceded by some number of milliseconds of logic high, so that the UART hardware can find the start of the incoming data and resync if the TX line is interrupted?

The code works as needed. I just want to know for my own education, is there some hardware UART requirement for a start of data condition?

Thanks!

Since you are dealing with 10 bits per character, I would expect that to be roughly the time needed to get back in sync. It all depends on the baud rate setting.

When there is no transmission, the TX line of the sender is always HIGH (Fig-1). Transmission begins by pulling the TX line down for one bit period. At the middle of the START bit, the receiver logic understands that a 10-bit-wide frame is about to arrive. The reception ends at the occurrence of the STOP bit. There is no possibility for the sync to go out of order once it is established by the START bit. The sender and receiver keep identical TX/RX clocks and frame settings because both are set to the same baud rate.

Figure-1: (UART frame diagram: idle line HIGH, START bit, data bits, STOP bit)
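
To put some numbers on the frame timing, here is a quick sanity-check sketch; the 2400 Bd rate used as an example matches the rate mentioned later in this thread, and 8N1 framing (1 start + 8 data + 1 stop = 10 bits) is assumed:

```cpp
// Quick sanity check of UART frame timing for 8N1 (1 start + 8 data + 1 stop = 10 bits).
// 2400 Bd is the rate used in this thread; change it to suit.
const unsigned long BAUD = 2400UL;

void setup() {
  Serial.begin(115200);                    // debug output only
  float bitTimeMs   = 1000.0 / BAUD;       // ~0.417 ms per bit at 2400 Bd
  float frameTimeMs = 10.0 * bitTimeMs;    // ~4.17 ms per 10-bit frame
  Serial.print(F("Bit time (ms): "));
  Serial.println(bitTimeMs, 3);
  Serial.print(F("Frame time (ms): "));
  Serial.println(frameTimeMs, 3);
}

void loop() {}
```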

Upon further reflection, I suspect that if you intend to continue this disconnect/reconnect operation, the only way for your code to recover is to implement error-recovery procedures and a protocol in your communications.

I am sending at 2400 baud. My understanding of UART operation is that it looks for a high-to-low transition to detect the start bit. I have not seen a spec that says the high needs to last a specific length of time. I have a unique requirement: controlling something in real time, which requires a constant stream to make adjustments, but the data can get interrupted. Getting it to sync back up has been the issue, currently solved by the 20 ms delay as stated before. This is not impacting the requirement for continual adjustment, but if there are some under-the-hood adjustments that can pull this in to under 20 ms, it would be good to know.

That tells me you must have a communication protocol that detects errors and time-outs, and error-recovery code on both the TX and the RX side.

That's correct.

That is what the stop bit is for.

At 2400 baud, a byte takes approx. 4 ms. You can easily modify Robin's code to add an 8 ms timeout, after which it will wait for the start marker again.
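
Something along these lines, for example. This is not Robin's code verbatim, just a sketch of the same start/end-marker pattern with an inter-character timeout bolted on; the 8 ms figure and the '<' / '>' markers are assumptions:

```cpp
// Start/end-marker receiver with an inter-character timeout.
// Modelled on the pattern from Serial Input Basics, not a verbatim copy.
const byte numChars = 32;
char receivedChars[numChars];
boolean newData = false;

void recvWithStartEndMarkers() {
  static boolean recvInProgress = false;
  static byte ndx = 0;
  static unsigned long lastCharMs = 0;
  const unsigned long timeoutMs = 8;        // roughly 2 frame times at 2400 Bd
  const char startMarker = '<';
  const char endMarker = '>';

  // Abandon a half-finished message if the line has gone quiet for too long.
  if (recvInProgress && (millis() - lastCharMs > timeoutMs)) {
    recvInProgress = false;
    ndx = 0;
  }

  while (Serial.available() > 0 && newData == false) {
    char rc = Serial.read();
    lastCharMs = millis();

    if (recvInProgress) {
      if (rc != endMarker) {
        receivedChars[ndx] = rc;
        if (ndx < numChars - 1) ndx++;      // clamp so the terminator always fits
      } else {
        receivedChars[ndx] = '\0';          // terminate the string
        recvInProgress = false;
        ndx = 0;
        newData = true;
      }
    } else if (rc == startMarker) {
      recvInProgress = true;
    }
  }
}
```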

There is a definition for the high period: it is the stop-bit length. Every bit, including the start bit and the one (or optionally two) stop bits, has the same bit length, which depends on the baud rate.

A UART often needs some kind of "flow control". If your code or firmware is not fast enough to make sure the UART receive register (or FIFO) is drained (copied out) during the stop-bit period, you get data corruption.
Imagine sending from the host at such a high repetition rate that a new start bit arrives right after a stop bit: the UART starts filling the receive register again. If your interrupt handling, or even worse, polling the UART for received data, takes longer than one UART bit time (the stop-bit period), the UART shift register overwrites it and you will see lost characters.

For flow control there are several options:
a) You could enable and use hardware flow control; it needs additional signals such as RTS/CTS (or DTR/DSR).
b) You could enable software flow control: the host UART (the other side, the sender) acts on special characters, CTRL-S (XOFF) and CTRL-Q (XON), which then cannot be part of the data any more. The receiver sends CTRL-S to tell the sender "stop, I am full or busy", and releases it by sending CTRL-Q.
c) You could use your own flow control: send fixed-size packets that you can handle at full speed, or have the host send an end marker, e.g. End of Transmission (EOT, 0x04), so the receiver knows the packet is complete and can process it, and the sender only generates a new packet after a while (taking into account how long the receiver needs to drain and process the packet).
d) Or use your own flow control by sending special characters back and forth, e.g. let the sender know when you are ready to receive a new packet (a rough sketch of this is shown below).
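
A minimal sketch of option d), assuming one board on each end of the link; the READY byte value and the fixed 4-byte packet size are arbitrary choices for illustration:

```cpp
// Simple application-level handshake (option d).
// The READY value and the 4-byte packet size are arbitrary choices.
const byte READY = 0x06;            // receiver -> sender: "ready for the next packet"
const byte PACKET_SIZE = 4;

// Receiver board: read one packet, process it, then invite the next one.
void receiverLoop() {
  static byte packet[PACKET_SIZE];
  static byte count = 0;

  while (Serial.available() > 0 && count < PACKET_SIZE) {
    packet[count++] = Serial.read();
  }
  if (count == PACKET_SIZE) {
    // ... process the packet here ...
    count = 0;
    Serial.write(READY);            // tell the sender it may transmit again
  }
}

// Sender board: only transmit when the receiver has said it is ready.
void senderLoop() {
  static bool receiverReady = true; // assume ready at start-up
  if (Serial.available() > 0 && Serial.read() == READY) {
    receiverReady = true;
  }
  if (receiverReady) {
    byte packet[PACKET_SIZE] = {'<', '3', '1', '>'};  // example payload
    Serial.write(packet, PACKET_SIZE);
    receiverReady = false;
  }
}
```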

Often it is not an issue with the baud rate; it is an issue with the speed of processing the received characters, especially making the UART device free again, i.e. draining the UART Rx register fast enough. If the receiver process is slower than the rate at which characters trickle in, or you have to handle buffers (FIFOs) and you hit "buffer full", the sender does not know, and if it keeps sending, data is lost.

The worst case is this:
The UART receiver has to wait for the stop-bit period to finish in order to decide: OK, there was a stop bit and a character was received. When you get the "stop bit was there" indication from the UART, the firmware starts to read the UART Rx register. But if the sender immediately sends again with a new start bit, your firmware may not yet have finished freeing the UART Rx register.
So this timing, the performance requirement for how fast your firmware has to process UART Rx, depends on how quickly the sender keeps sending the next character.

BTW: TeraTerm, used as a UART terminal, has an option to insert a delay between characters sent, or between strings (lines).
You may have to make sure that the sender (the remote host) does not "violate" the timing limitations on your receiver side.

I think you are confusing two different things and treating them as if they were the same.

The hardware, the UART, requires that both ends run at the same speed and that the Rx sees a single start bit, which commences with a change from high to low and lasts one bit time, [edit] about 4 ms at 2400 baud, as @sterretje says. Correction: as @sterretje did NOT say, it lasts about 0.4 ms at 2400 baud [end of edit]. Next comes the data; the receiver does not care what the data is, it just samples the next 8 bit periods, timed from the middle of the start bit, and uses what it sees as the data. Then it samples the stop bit(s) and, if they are valid, puts the data in the hardware Rx register. When that happens an interrupt is generated to tell the Rx code, which you do not normally see or have anything to do with, to put the received byte in the receive buffer and do whatever else needs doing, such as incrementing the index of the Rx buffer.

If you mess with the above process, for example by disconnecting the wire, then maybe the Rx hardware will still see data it can put in the hardware Rx register, or maybe it won't. If it does see data, there is nothing to say it is valid; it might be junk. All you can say about it is that it passed the minimal checks the hardware carried out before accepting it as a valid byte for the hardware Rx register. If it does not see a valid byte (remember 'valid' means a valid start and stop; the hardware does not care what is in between), then it doesn't put it in the hardware receive register, doesn't raise the interrupt, and the byte is lost.

The synchronisation of the above happens on a per byte basis and will work as long as the transmitter is sending bytes at the correct baud rate and there is a reliable connection to the receiver. After a loss of connection it should start working at the next good, complete byte.

Separately from all that, and I think this is what you are confusing with the above, bytes appear in the receive buffer for your code to process. Maybe those bytes are valid, maybe not. Maybe there is a complete message there, maybe not. Your code has to analyse what it sees and determine whether it can use the information it got or has to discard it because it is invalid for some reason. What counts as valid or not is down to your code and what you defined as valid data. You have to write code to check this. The UART hardware knows nothing of it.
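
For example, a minimal check of the kind I mean, assuming the payload between the markers is supposed to be a number the servo can use (the 0 to 180 limit is an assumption):

```cpp
// The UART hands you bytes; only your code can decide whether they form a usable
// command. Here the message text is expected to be a plain number in 0..180.
bool isUsableCommand(const char *msg, int &valueOut) {
  char *endPtr;
  long v = strtol(msg, &endPtr, 10);     // parse the digits
  if (endPtr == msg || *endPtr != '\0') {
    return false;                        // not a clean number -> discard
  }
  if (v < 0 || v > 180) {
    return false;                        // a number, but outside the usable range
  }
  valueOut = (int)v;
  return true;
}
```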

Adding 20 ms might have fixed the symptoms of the problem, but it's a bodge. Doing that means you have not understood what is really going on and addressed the real problem. When you understand the real problem and have dealt with it, you will know a great deal more about serial communications, and communications in general.

A pedantic note on terminology: serial is generally said to be asynchronous because there is nothing to synchronise one end with the other; the start and stop bits are there to tell the receiver that something is about to arrive and that transmission of that byte has finished. Apart from that restriction, the data can arrive any time the transmitter wants to send it. What is not generally said is that it is also plesiochronous, because the clocks at each end are not in any way synchronised; they are independent of each other and just happen to be set at roughly the same rate.


I'm sure sterretje said 4ms for a byte :slight_smile:


Thanks, I have corrected it.

And any errors in that byte reception are reflected in the USART status register. But because the serial communications on the Arduino are meant for hobby and experimental use, that status register cannot be seen by the Arduino program. By the time your code gets the serial data, any error status has long since been replaced by new status.


An example of where this happens in the real world is when receiving serial data by short-wave radio. In practice it might take a few bytes to acquire lock when you first tune in the signal. This is because any negative transition in the data may be mistaken for a start bit, so the receiver will read something that was not sent.

Normally this will pull in as you send more data. However, there are situations, especially when the data being sent is continuous and repetitive, where it will never get into sync. So make sure the data you send has some sort of irregularity about it.

For example throw in a random null from time to time, or some other byte that has no meaning to the receiver.

But yes this is a problem with continuous asynchronous data transmission.
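
A sketch of what I mean on the sending side; the every-20th-message interval and the 0x00 filler byte are arbitrary choices:

```cpp
// Occasionally send a byte the receiver will ignore, so a repetitive stream
// has an irregularity that lets start-bit detection lock on again.
void sendWithOccasionalNull(int value) {
  static byte counter = 0;
  Serial.print('<');
  Serial.print(value);
  Serial.print('>');
  if (++counter >= 20) {
    Serial.write((byte)0);   // filler byte, meaningless to the receiver
    counter = 0;
  }
}
```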


The quickest and easiest way is to include an incrementing message number in each message that is sent. Let it roll over to zero when necessary.
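
Something like this, for example; the single-byte rolling counter and the message layout are assumptions, not anything from the original code:

```cpp
// Sender: tag each message with a rolling sequence number.
void sendNumbered(int value) {
  static byte seq = 0;
  Serial.print('<');
  Serial.print(seq);
  Serial.print(',');
  Serial.print(value);
  Serial.print('>');
  seq++;                        // a byte rolls over to 0 automatically after 255
}

// Receiver: a gap in the sequence means a message was lost or corrupted.
bool sequenceOk(byte received) {
  static byte expected = 0;
  bool ok = (received == expected);
  expected = received + 1;      // resynchronise on whatever actually arrived
  return ok;
}
```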

is there some hardware UART requirement for a start of data condition?

If two UART devices are exchanging data continuously, and some glitch causes them to get out of sync at the bit level, they can remain out of sync indefinitely (because the start/stop bits end up looking like data bits, and vice versa.)

This can be fixed by having the transmitter "pause" for at least one full character time (line state = Mark, for 10+ bit times) before resuming.
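
On the transmit side that can be as simple as the sketch below. The 2400 Bd rate and the two-character margin are assumptions; in practice a larger gap may be needed (the 20 ms found earlier in this thread suggests so):

```cpp
// Hold the TX line idle (Mark) for more than one full character time between
// bursts, so a receiver that has slipped a bit can re-find a genuine start bit.
const unsigned long BAUD = 2400UL;
const unsigned long CHAR_TIME_MS = (10UL * 1000UL) / BAUD + 1;  // ~5 ms, rounded up

void sendWithResyncGap(int value) {
  Serial.print('<');
  Serial.print(value);
  Serial.print('>');
  Serial.flush();               // wait until the last byte has actually left the UART
  delay(2 * CHAR_TIME_MS);      // idle gap of a couple of character times
}
```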

I tried to creep up on the wait state. I tried 8, 10, 15... All can be disrupted; 20 ms seems to work very reliably. I have three separate serial statements I run back to back in the data-out code, and the 20 ms delay after the last serial statement is the wait state. Not sure if that is what you're talking about. In any case I'm operating a servo at a distance, with the standard 50 Hz update which the RX side takes care of. No data-error or flow-control code, just the data stream.

Is that with, or without, a Serial.flush() call before the wait?
When using the Serial functions, there is substantial buffering in the firmware and hardware; something like 66 bytes worth (board and core dependent.) Serial.flush() should wait until all of that is over.

So are you receiving this data and then generating the servo PWM?
Or are you actually sending the PWM pattern? If you are then this is wrong.

I'll try that. My 20ms could be equivalent to flush somehow. I'll drop the delay and add the flush to see if it works.

No, I'm passing integer numbers: a 1-byte payload to map to a position. The number is used by the receiver to move the servo, and the position is updated every 16.67 ms, independent of whether the TX sends updates or not.
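
Roughly this shape, for illustration; the pin number, the 0-255 to 0-180 mapping and the 20 ms refresh used here are assumptions, and the start/end-marker framing is left out for brevity:

```cpp
#include <Servo.h>

// Receiver side: latch the most recent 1-byte value and refresh the servo on a
// fixed schedule whether or not new data arrives.
Servo myServo;
int lastAngle = 90;                    // safe default until the first byte arrives
unsigned long lastUpdateMs = 0;

void setup() {
  Serial.begin(2400);
  myServo.attach(9);                   // servo signal pin (assumption)
}

void loop() {
  while (Serial.available() > 0) {     // keep only the newest value
    byte payload = Serial.read();
    lastAngle = map(payload, 0, 255, 0, 180);
  }
  if (millis() - lastUpdateMs >= 20) { // fixed refresh, independent of the TX
    myServo.write(lastAngle);
    lastUpdateMs = millis();
  }
}
```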