retrolefty:
When designing APIs it is always useful to take a broader view of the services needed.
And that is why it is useful to support both RX flush and TX drain as the same API
is often used on multiple different hardware implementations and you never know what
type of protocols will be built on top of the API.
It is unfortunate that Arduino does not support both purging the RX Q and
draining the TX Q, that the flush() API call works differently on 1.0 and pre-1.0,
and that on 1.0 it works differently between SoftSerial and HardwareSerial.
Well, I'm still not totally sold on the need for flush. As I said, I worked on minicomputer systems that used 128-channel hardware-based serial multiplexers that did indeed use buffers, but all hardware-based with no software control over buffer purging. And those systems, with proper software, supported many different protocols, including Modbus and other node-based networks.
Which flush?
Yep. I also worked on a 128 port serial crossbar switch in the early 80's.
Totally different animal. In that situation it is all about moving the bytes from port to
port and not dropping anything, while still processing and handling hardware and/or software
flow control.
This is completely different from pushing data over protocols that must be able to
recover from lost bytes or do throttling on messages.
When trying to recover you sometimes have to resynchronize the two ends
or deal with buffering and end-to-end latency issues, and that is where having the ability
to drain and flush buffers becomes important.
When throttling messages, you have to know when the last character of
a message has actually been sent (not just buffered) in order to hold off the first character of the next message
for the proper amount of time.
As far as taking a broad view on API services, why doesn't the serial API offer even the barest receive error detection, like parity errors or framing errors? The hardware supports it. I would think that would have a higher priority for a user API addition over a silly "let's let the users play with buffers".
While it might be useful to have more information like that, from a protocol perspective,
dealing with output draining is sometimes more important than knowing about
RX-type errors.
Consider the real world case I outlined above of a serial LCD backpack on arduino
that needs delays between serial messages.
It can become messy and complicated if the library/driver interface cannot tell you when the
last character was actually transmitted.
The flush() on Arduino 1.0 solves that problem.
When it returns, you know that all the characters written to the serial device
have been transmitted
and you can start your throttling delay to determine when it is safe to
send the next message.
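The throttling pattern can be sketched like this. The StubSerial class and the 10 ms gap are assumptions standing in for real hardware so the example is self-contained; on an actual board this would be the global Serial object and a delay() call after flush() returns.

```cpp
#include <cassert>
#include <string>

// Stand-in for the hardware serial port, so the pattern runs self-contained.
struct StubSerial {
    std::string wire;                         // captures what was "transmitted"
    void print(const char* s) { wire += s; }  // buffered write; may return early on real HW
    void flush() { /* Arduino 1.0 semantics: block until TX completes */ }
};

StubSerial Serial;
const unsigned long GAP_MS = 10;  // assumed quiet time the backpack needs

void sendMessage(const char* msg) {
    Serial.print(msg);  // queues the message in the TX buffer
    Serial.flush();     // returns only once every byte is on the wire
    // delay(GAP_MS);   // on real hardware: hold the line idle before the next message
}
```

The key point is the ordering: the gap timer must not start until flush() returns, because print() alone only guarantees the bytes are buffered, not sent.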
Think broader. The same read()/write() interface should work across all kinds of devices.
What happens when data gets buffered up to a message size but doesn't time out, or
has a timeout of several seconds, if the message is shorter than the message size?
Suppose you don't want to wait seconds for that message to be transmitted.
That is the case where you would call a "drain" routine to kick/force the characters out of the device.
I've been doing serial stuff off and on for 30+ years and I can say that I have used
flush and drain off and on through the years as well.
From the early Apple ][ days, implementing bell 202 half duplex modem file transfer support,
file transfer and terminal emulation support on the early IBM PC,
GDBmon serial support on 68k processors in the late 80s,
Protocol processing over hundreds of modems for the 711 Movie Quick system in the late 80s,
to recent Scuba Dive Computer serial data transfers on Windows and Linux,
to debugging on Arduino.
Consider this oddball case that I have used in the past on some systems where you had
no access to short timers (this was 25+ years ago).
You can use the UART to time a delay. If you set the baud rate appropriately, you can send
characters out the serial port, and when the transmission is done, you know how much time has elapsed.
If the characters are buffered, you need a way to drain the buffer and know when all the
data has been transmitted. Otherwise your timing will be way too short.
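The arithmetic behind the trick: with 8N1 framing each character occupies 10 bit times (start + 8 data + stop), so n characters at a given baud rate take a predictable amount of time on the wire. The function name here is illustrative.

```cpp
#include <cassert>

// Time on the wire for `chars` characters at `baud`, assuming 8N1 framing.
// Only valid if you drain the TX buffer and wait for the last character;
// otherwise the measured delay comes up short.
unsigned long txTimeMicros(unsigned long chars, unsigned long baud) {
    const unsigned long BITS_PER_CHAR = 10;  // start + 8 data + stop
    return chars * BITS_PER_CHAR * 1000000UL / baud;
}
```

For example, at 9600 baud one character is roughly a millisecond, so sending 96 characters and waiting for the transmission to finish gives you about a 100 ms delay.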
All of these used some combination of "flush" and "drain" on the serial interfaces.
"flush" and "drain" have their uses.
While the flush() call on Arduino 1.0 solves the TX "drain" issue
(admittedly the "drain" issue on 1.0 didn't exist on pre-1.0, since the TX side was unbuffered),
having a real "flush" on the RX side is also quite useful in
some situations.
In my view, once you start to use TX and RX buffers, you often need a way to manage the data in the buffers
beyond just being able to stuff a byte in the TX buffer and yank a byte from the RX buffer.
"flush" and "drain" are about helping to manage the RX and TX buffers.
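Put together, the two operations look something like this, modeled on a software buffer rather than real UART hardware. The class and method names are illustrative, not any actual serial API; the `wire` string stands in for bytes that have actually left the device.

```cpp
#include <cassert>
#include <cstdint>
#include <deque>
#include <string>

// Minimal model of a buffered serial port with both buffer-management
// operations discussed above: RX "flush" (purge) and TX "drain" (wait/empty).
struct BufferedPort {
    std::deque<uint8_t> rxBuf, txBuf;
    std::string wire;  // bytes that have "actually been transmitted"

    void write(uint8_t b) { txBuf.push_back(b); }  // stuff a byte in the TX buffer
    int read() {                                   // yank a byte from the RX buffer
        if (rxBuf.empty()) return -1;
        int b = rxBuf.front();
        rxBuf.pop_front();
        return b;
    }
    void rxFlush() { rxBuf.clear(); }              // "flush": purge unread RX data
    void txDrain() {                               // "drain": return only when TX is empty
        while (!txBuf.empty()) {
            wire += static_cast<char>(txBuf.front());
            txBuf.pop_front();
        }
    }
};
```

The read()/write() pair alone can't tell you that a message has fully left the device, or throw away stale input in one step; that's the gap rxFlush() and txDrain() fill.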
--- bill