I am currently developing a long-range communication system using an ESP32 and an external board that handles the long-range messages. The external board is connected to the UART2 pins of the ESP32. My issue is that when I receive a long byte array, my current code sometimes splits it into multiple smaller byte arrays. This causes errors, because my code requires the complete array. The byte array itself can vary in size, so limiting it to a fixed amount is not an option.
What is the best way to receive the byte array as one long array instead of having it split into multiple smaller byte arrays?
This is what I currently have in loop():
if (Serial2.available() > 0) {
  delay(10); // wait a bit for more bytes to arrive
  uint8_t arrayLength = Serial2.available(); // check length of buffered data
  uint8_t receivedArray[arrayLength]; // create new array
  uint16_t rlen = Serial2.readBytes(receivedArray, arrayLength); // copy buffered bytes into the new array
  Serial2.flush();
  run_UART2_message(receivedArray, arrayLength);
}
Don't take an instantaneous snapshot of the number of bytes waiting and read only those.
Does your serial protocol include any sort of start or end marker bytes, or in-message information about the message length, and are there any integrity checks such as a CRC or checksum?
As written, the array size is simply however many bytes happen to arrive within the 10 ms delay. If you want the data to all be in one array, size the array for the longest possible message.
you could do this:
if (Serial2.available() > 0) {
  delay(10); // wait a bit
  uint8_t arrayLength = Serial2.available(); // check length of buffered data
  uint8_t* receivedArray = new uint8_t[arrayLength]; // allocate heap memory for the array
  uint16_t rlen = Serial2.readBytes(receivedArray, arrayLength); // copy buffered bytes into the new array
  Serial2.flush();
  //run_UART2_message(receivedArray, arrayLength);
  delete[] receivedArray; // deallocate the heap memory used by the array
}
Blackfin:
Don't take an instantaneous snapshot of the number of bytes waiting and read only those.
Does your serial protocol include any sort of start or end marker bytes, or in-message information about the message length, and are there any integrity checks such as a CRC or checksum?
The external board uses a CRC, a start byte of 0xAA, and an end byte of 0x16, but the data in the byte array can contain that same end value, so splitting at 0x16 will not be an option. Is there an example that takes this into account?
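Not something from the thread, but a common pattern for frames whose end byte can also occur in the payload is to trust a length field (if the protocol has one) plus the CRC, rather than scanning for 0x16. Below is a minimal sketch assuming a hypothetical layout of 0xAA, one length byte, payload, one CRC byte, 0x16; the layout and the crc8 stand-in (a plain XOR checksum) are assumptions, not the external board's actual protocol:

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical frame layout (assumption): 0xAA, LEN, payload[LEN], CRC, 0x16.
// crc8 here is a simple XOR checksum stand-in; the real board's CRC will differ.
static uint8_t crc8(const uint8_t* data, size_t len) {
    uint8_t c = 0;
    for (size_t i = 0; i < len; ++i) c ^= data[i];
    return c;
}

// Feed incoming bytes one at a time; returns the payload length once a
// complete, valid frame has been assembled into 'payload', and 0 otherwise.
class FrameParser {
public:
    int feed(uint8_t b) {
        switch (state_) {
        case WAIT_START:
            if (b == 0xAA) state_ = WAIT_LEN;     // ignore noise until start byte
            break;
        case WAIT_LEN:
            len_ = b; pos_ = 0;
            state_ = (len_ > 0) ? IN_PAYLOAD : WAIT_CRC;
            break;
        case IN_PAYLOAD:
            payload[pos_++] = b;                  // 0x16 in here is just data
            if (pos_ == len_) state_ = WAIT_CRC;
            break;
        case WAIT_CRC:
            crcOk_ = (b == crc8(payload, len_));
            state_ = WAIT_END;
            break;
        case WAIT_END:
            state_ = WAIT_START;
            if (b == 0x16 && crcOk_) return len_; // complete, valid frame
            break;                                // bad frame: resynchronize
        }
        return 0;
    }
    uint8_t payload[255];
private:
    enum { WAIT_START, WAIT_LEN, IN_PAYLOAD, WAIT_CRC, WAIT_END } state_ = WAIT_START;
    uint8_t len_ = 0, pos_ = 0;
    bool crcOk_ = false;
};
```

Because 0x16 is only accepted as a terminator at the exact position the length field predicts, a 0x16 inside the payload cannot truncate the frame. Note that a single length byte caps the payload at 255 bytes; messages up to 267 bytes would need a two-byte length field.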
groundFungus:
As written, the array size is simply however many bytes happen to arrive within the 10 ms delay. If you want the data to all be in one array, size the array for the longest possible message.
The size can theoretically vary from 6 to 267 bytes; do I have to use such a large buffer for every message?
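For what it's worth, 267 bytes is only about a quarter of a kilobyte, which an ESP32 can easily spare, so one statically allocated worst-case buffer avoids per-message allocation entirely. One detail worth noting: 267 does not fit in a uint8_t, so the length variable in the snippets above would silently wrap for the largest messages; the length needs to be uint16_t or wider. A minimal sketch, assuming MAX_MSG_LEN = 267 really is the protocol maximum:

```cpp
#include <cstdint>
#include <cstddef>

// Assumed protocol maximum from the thread: messages of 6 to 267 bytes.
constexpr size_t MAX_MSG_LEN = 267;

struct MessageBuffer {
    uint8_t data[MAX_MSG_LEN];
    uint16_t len = 0;   // uint16_t, because 267 does not fit in a uint8_t

    // Append one incoming byte; returns false if the buffer would overflow.
    bool push(uint8_t b) {
        if (len >= MAX_MSG_LEN) return false;
        data[len++] = b;
        return true;
    }
    void clear() { len = 0; }
};
```

In loop(), bytes read from Serial2 would be pushed into one persistent MessageBuffer instead of a freshly sized array per call, so a message split across several passes still accumulates into a single array.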
sherzaad:
hope that helps....
The code on its own doesn't seem to help; note that I did uncomment the run_UART2_message() call, because I need it to run.
I seem to have found a fix for my issue, which involves comparing the number of buffered bytes before and after a delay and retrying if the count has changed. I also applied the example from sherzaad, although I don't see a big difference. @sherzaad, can you tell me what the actual difference is between my first code and yours?
Here is my fix:
if (Serial2.available() > 0) {
  uint8_t arrayLength = Serial2.available(); // check length of buffered data
  delay(10); // wait a bit so the buffer can fill
  if (arrayLength != Serial2.available()) return; // stop if more data arrived during the delay; retry next loop()
  uint8_t* receivedArray = new uint8_t[arrayLength]; // allocate heap memory for the array
  uint16_t rlen = Serial2.readBytes(receivedArray, arrayLength); // copy buffered bytes into the new array
  Serial2.flush();
  run_UART2_message(receivedArray, arrayLength); // process the complete array
  delete[] receivedArray; // deallocate the heap memory used by the array
}
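The length-stability check can still race if bytes trickle in slowly enough that the count happens to match across the 10 ms window. An alternative that avoids delay() entirely is to drain whatever is available on each pass into a persistent buffer and only hand the message off once the line has been quiet for several consecutive polls. The sketch below keeps the idea testable off-device by taking the available bytes as arguments; the IDLE_PASSES threshold and the 267-byte cap are assumptions:

```cpp
#include <cstdint>
#include <cstddef>

// Accumulates bytes across multiple loop() passes and reports a complete
// message once no new data has arrived for IDLE_PASSES consecutive polls.
class Accumulator {
public:
    static constexpr size_t MAX_LEN = 267;  // assumed protocol maximum
    static constexpr int IDLE_PASSES = 3;   // assumed quiet threshold

    // Call once per poll with the bytes currently available (possibly none).
    // Returns the finished message length, or 0 while still accumulating.
    size_t poll(const uint8_t* bytes, size_t n) {
        if (n > 0) {
            idle_ = 0;                      // new data: reset the quiet counter
            for (size_t i = 0; i < n && len_ < MAX_LEN; ++i) buf_[len_++] = bytes[i];
        } else if (len_ > 0 && ++idle_ >= IDLE_PASSES) {
            size_t done = len_;             // line went quiet: message complete
            len_ = 0;
            idle_ = 0;
            return done;
        }
        return 0;
    }
    const uint8_t* data() const { return buf_; }
private:
    uint8_t buf_[MAX_LEN];
    size_t len_ = 0;
    int idle_ = 0;
};
```

On the ESP32 this would be driven from loop(): read the Serial2.available() bytes into a small chunk, pass them to poll(), and call run_UART2_message(acc.data(), n) whenever poll() returns a nonzero length.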