Typically, one uses start and end of packet markers, with some delimiter between the length and the data. If you receive a frame of the form

<xx:dd...dd:CC>

you can determine where a packet starts ("<"), how many bytes (xx) are to be in the packet, then you should see a known value (":"), then xx data bytes (dd...dd), then another known value (":"), followed by a checksum (CC), followed by another known value (">").
If the :Checksum> part (known value, unknown value, known value) appears before xx bytes have been read, then you know data was lost. If the checksum itself is what was lost, you know that too, because the known values are not in the correct positions relative to the start-of-packet marker. Determining whether it is the length byte that got lost is a bit tougher. If every valid packet length is less than the ASCII code of the delimiter, life is easier: a ":" arriving where the length byte belongs can never be a legitimate length, so the loss is detected immediately.
With a maximum packet length and known start and end markers, one can locate the start and end of a packet in an otherwise unknown stream of data with a relatively high degree of certainty.
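A minimal sketch of such a receiver in C, written as a byte-at-a-time state machine. The names (feed_byte, on_packet, MAX_LEN) and the one-byte additive checksum are placeholders of mine, not anything defined above; swap in your real checksum and length width:

#include <stdint.h>

#define MAX_LEN 64                      /* assumed maximum payload length */

enum state { S_START, S_LEN, S_DELIM1, S_DATA, S_DELIM2, S_SUM, S_END };

static enum state st = S_START;
static uint8_t buf[MAX_LEN];
static uint8_t want, got;

/* user-supplied: called once per complete, verified packet */
extern void on_packet(const uint8_t *data, uint8_t len);

/* placeholder checksum: a simple additive sum over the payload */
static uint8_t checksum(const uint8_t *p, uint8_t n)
{
    uint8_t s = 0;
    while (n--) s += *p++;
    return s;
}

void feed_byte(uint8_t b)
{
    switch (st) {
    case S_START:                       /* discard bytes until '<' */
        if (b == '<') st = S_LEN;
        break;
    case S_LEN:                         /* implausible length => resync */
        if (b >= 1 && b <= MAX_LEN) { want = b; got = 0; st = S_DELIM1; }
        else st = (b == '<') ? S_LEN : S_START;
        break;
    case S_DELIM1:                      /* ':' must follow the length */
        st = (b == ':') ? S_DATA : (b == '<') ? S_LEN : S_START;
        break;
    case S_DATA:                        /* count exactly 'want' data bytes */
        buf[got++] = b;
        if (got == want) st = S_DELIM2;
        break;
    case S_DELIM2:                      /* anything but ':' here means loss */
        st = (b == ':') ? S_SUM : (b == '<') ? S_LEN : S_START;
        break;
    case S_SUM:                         /* wrong checksum => discard */
        st = (b == checksum(buf, want)) ? S_END : S_START;
        break;
    case S_END:                         /* '>' closes a verified frame */
        if (b == '>') on_packet(buf, want);
        st = (b == '<') ? S_LEN : S_START;
        break;
    }
}

On any byte that breaks the expected pattern, the machine falls back to hunting for "<", which is the resynchronization described above; a fresh "<" is always honored as a possible new start.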
I'm doing nearly the same thing, but I don't see what the ":" gives you. In my case the protocol is mostly for integrity, i.e. did the correct device send the packet of data, so the protocol includes an extra identifier. In the event of a detected error, all I want to do is discard the data.
If the (correct) CRC has not appeared by the time the stated number of bytes has been read, the packet is corrupted.
If the final delimiter has not appeared within a specific time, the packet is corrupted (this allows for line breaks, and allows a restart).
If the start delimiter has not appeared, discard data until it does.
If a start has been missed, and the data happens to contain another start marker, then the length and checksum are still not going to be right.
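A sketch of that discard-on-error scheme in C, focusing on the timeout rule. The platform hooks millis() and read_byte() and the 100 ms window are my own assumptions, not from the posts above:

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical platform hooks: a millisecond tick and a
   non-blocking UART read that returns false when idle. */
extern uint32_t millis(void);
extern bool read_byte(uint8_t *b);

#define FRAME_TIMEOUT_MS 100u           /* assumed worst-case frame time */

void poll_link(void)
{
    static bool in_frame = false;
    static uint32_t deadline;
    uint8_t b;

    /* Rule: no final delimiter within the window => corrupted frame.
       Drop the partial frame and resynchronize on the next start. */
    if (in_frame && (int32_t)(millis() - deadline) > 0)
        in_frame = false;

    while (read_byte(&b)) {
        if (!in_frame) {
            if (b != '<') continue;     /* rule: discard until a start */
            in_frame = true;
            deadline = millis() + FRAME_TIMEOUT_MS;
            /* reset length/ID/CRC accumulation here */
        } else if (b == '>') {
            in_frame = false;
            /* verify length, sender identifier, and CRC here;
               on any mismatch simply discard the frame */
        } else {
            /* accumulate length, identifier, payload, running CRC */
        }
    }
}

Note this treats ">" as unambiguous; if the payload is binary, the length field has to decide where the frame should end, as in the earlier state machine.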
I'm using a CCITT CRC32, modified to calculate the CRC one byte at a time.
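For reference, a byte-at-a-time CRC-32 update looks like this in C. I'm showing the common reflected polynomial 0xEDB88320 (the one used by IEEE 802.3 and ITU-T V.42); check the seed, polynomial, and final XOR against your own variant:

#include <stdint.h>

/* One-byte-at-a-time CRC-32 (reflected, polynomial 0xEDB88320).
   Seed with 0xFFFFFFFF, feed each received byte, and XOR the
   result with 0xFFFFFFFF at the end of the frame. */
uint32_t crc32_update(uint32_t crc, uint8_t byte)
{
    crc ^= byte;
    for (int i = 0; i < 8; i++)
        crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
    return crc;
}

/* typical use while receiving:
       uint32_t crc = 0xFFFFFFFFu;
       crc = crc32_update(crc, b);        // once per payload byte
       ...
       if ((crc ^ 0xFFFFFFFFu) == received_crc) { ... packet good ... }
*/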