My original inquiry was why at 115200 baud every byte comes in okay except value 0x00.
I don't want to nitpick with you, but no, it wasn't. Your initial post was:
hanlee:
I have a device that sends stream of data. Data includes stream of bytes ranging from 0 to 255. When stream of data contains zero (0x00), the available() does not account for it. For example, if the device sends five bytes (0xAA 0x00 0x4B 0x5F 0x36), then the available() returns only 4 instead of 5. Similarly, if another stream contains (0xAA 0x00 0x4B 0x00 0x36), then the available() returns only 3 instead of 5. How can I or What can I do to have available() account for 0x00s?
Thanks
There was no mention of 115200 baud until reply #9.
What is the significance of the baud rates between the Serial1 port and the serial terminal (used for debugging)?
The significance is that output to serial ports (at least with HardwareSerial) is "blocking". So if you write to a slow port, like one at 9600 baud, you stop other things from happening. Having said that, I think reading is still done with interrupts, so that probably wasn't the overall reason.
I think the overall reason is that you are running at 8 MHz.
I am pleased the problem is solved, and to give you credit, you correctly deduced somewhat earlier that a lower baud rate would help.
My question is why is the error prone to 0x00 and not any other bytes?
To answer this, I gather from the datasheet and the library code that, to achieve the fastest baud rates, the library uses "Double Speed Operation". A note in the datasheet reads:
The transfer rate can be doubled by setting the U2Xn bit in UCSRnA. Setting this bit only has effect for the asynchronous operation. Set this bit to zero when using synchronous operation.
Setting this bit will reduce the divisor of the baud rate divider from 16 to 8, effectively doubling the transfer rate for asynchronous communication. Note however that the Receiver will in this case only use half the number of samples (reduced from 16 to 8) for data sampling and clock recovery, and therefore a more accurate baud rate setting and system clock are required when this mode is used.
There is a bit of an implication there that at high speeds the behaviour is a bit more unreliable. As to why, exactly, it affects 0x00 and not other bytes, I am not sure. However once you get into "unreliable" operation, the unreliability may manifest itself in strange ways.
You raise an interesting point, and reading the datasheet a bit more (around page 191) it appears that there is an internal state machine that attempts to compensate for clock drift called "Asynchronous Data Recovery". Without getting bogged down in too much detail, I am guessing that a 0x00 byte is probably the hardest for the recovery hardware to handle, as it consists of a start bit (0) followed by 8 data bits (all 0). So, nine consecutive zeroes don't give the hardware any "state changes" to "latch onto" to resynchronize. Even a single 1 bit in the middle could help the hardware detect that the clock speed is drifting, and compensate.
I didn't know it did that, and your reported problem has helped increase my understanding of the hardware. Thanks for that!