Changing the serial port's parity or stop-bit settings causes NO problems????

A UART communications link has an "idle" state (1). You wake up the receiving UART by sending a "start bit" (0); the receiver waits for the start bit to end (counting some number of internal clocks), and then starts shifting in data bits. After 7 or 8 (typically) data bits, the transmitter might send a parity bit as well. At that point, the receiving UART is done receiving the character. The "stop bits" are the guaranteed return to the idle state between characters, and aren't really necessary to the receiver. In "ancient times" some mechanical receivers may have needed two bit times between characters to get ready for the next character, but any UART made in the last 20 years or so doesn't care whether there are one, two, or even more stop bits.
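To make the framing concrete, here's a minimal sketch (plain C, purely illustrative - the function name show_frame is made up) that prints the bit sequence a transmitter puts on the wire for one character, with and without even parity:

```c
#include <stdio.h>

/* Print the on-the-wire bit sequence for one character: start bit (0),
 * data bits LSB-first, optional even parity bit, stop bit (1).
 * A real UART shifts these out at the configured baud rate. */
static void show_frame(unsigned char data, int bits, int use_parity)
{
    int parity = 0;

    printf("idle 1 | start 0 | data");
    for (int i = 0; i < bits; i++) {      /* LSB goes on the wire first */
        int bit = (data >> i) & 1;
        parity ^= bit;                    /* even parity = XOR of data bits */
        printf(" %d", bit);
    }
    if (use_parity)
        printf(" | parity %d", parity);
    printf(" | stop 1 | idle 1 ...\n");
}

int main(void)
{
    show_frame(0x0D, 8, 0);   /* 8N1: carriage return */
    show_frame(0x0D, 7, 1);   /* 7E1: same character, 7 data bits + even parity */
    return 0;
}
```

Note what the 7E1 line shows: the even-parity bit for 0x0D is a one, which is exactly why case 2a below delivers 0x8D to a receiver that's expecting 8 plain data bits.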

Now, for parity, there are a couple of paths:

  • The receiver is configured for parity, but the transmitter isn't, or sends the wrong parity. This means that the receiving UART will detect a "parity error" when it reads the character. But it's the responsibility of the receiving UART driver to decide what to do with that error, and most drivers (including the Arduino code) simply ignore the error and pass whatever data they did receive to the application; the sketch after this list shows where that decision lives. (We could have a discussion about what it SHOULD do - it's pretty unclear. I think passing the data is the best choice.)

  • The receiver is configured for NO parity, but the transmitter is sending parity. There are two sub-cases:

  • 2a) For transmitting 7 bits + parity (8 bits total), with the receiver configured for 8 bits, the UART will actually receive the "wrong" character whenever the parity bit is a one. You'll get 0x8D instead of 0x0D, for example. Except most higher-level code is expecting ASCII and will routinely strip off the 8th bit anyway (see the masking example after this list), so usually things continue to work.

  • 2b) For transmitting 8 bits + parity (9 bits total), the parity bit will appear where there ought to be a stop bit. If it's a one, then it in fact looks like a stop bit, and nothing happens. If the parity bit is a zero, it causes what's known as a "framing error." Like a parity error, this is something that the driver has to handle (again, see the sketch after this list), and most don't. Usually the 8 data bits are just passed to the application, and nothing ever "notices" that there was a parity bit there.
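Here's roughly what that driver-side choice looks like. This is a minimal sketch assuming an ATmega328P-style UART (the register and bit names UCSR0A, UDR0, RXC0, UPE0, FE0 come from that part's datasheet); it notes the error flags and then passes the data along anyway:

```c
#include <avr/io.h>

/* Read one character from USART0 on an ATmega328P-style AVR.
 * Error flags must be read from UCSR0A *before* reading UDR0,
 * because reading UDR0 advances the receive buffer. */
unsigned char uart_read(void)
{
    while (!(UCSR0A & (1 << RXC0)))    /* wait for a received character */
        ;
    unsigned char status = UCSR0A;     /* flags for THIS character */
    unsigned char data   = UDR0;

    if (status & (1 << UPE0)) {
        /* parity error: received parity bit didn't match (path 1 above) */
    }
    if (status & (1 << FE0)) {
        /* framing error: stop bit sampled as 0 (path 2b above) */
    }

    /* Do what most drivers do: hand the data to the application anyway. */
    return data;
}
```

A stricter driver could return an error code or substitute a marker character in those two branches; as argued below, silently passing the data along is usually the more useful behavior.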
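And the "strip off the 8th bit" fixup from case 2a is just a one-line mask (plain C, illustrative):

```c
#include <stdio.h>

int main(void)
{
    unsigned char received = 0x8D;          /* 0x0D with the parity bit read as bit 7 */
    unsigned char ascii = received & 0x7F;  /* keep only the low 7 (ASCII) bits */
    printf("received 0x%02X -> 0x%02X\n", received, ascii);   /* prints 0x0D */
    return 0;
}
```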

This may sound terrible - "isn't the parity bit supposed to help you detect transmission errors?" But it seems that:

  • Software that cares about errors will have some higher-level error checking - a CRC or checksum - and it would rather get "bad" characters than have characters silently dropped.
  • If you have a user, they'd probably rather see the wrong character echoed (and then fix it manually) than get either a special error character (which still needs fixing) or nothing at all.
  • Byte parity isn't actually a very good way to detect the kinds of errors that occur in real serial transmission.
  • The "robustness" you get by ignoring parity errors is "better" than the "accuracy" you might obtain by rejecting bad characters. (Huh - sort of like communicating with someone who isn't fluent in your language: am I going to say "you misspelled this word, so I can't understand you"? No - I'll do my best to understand you anyway.)