My papa always said to me, "If you transmit 1" to 1000' down the serial tube, it better have error checking." I chose CRC-16 error checking for my Master/Slave LIN bus packet protocol. It uses a cyclic redundancy check with an "EEPROM-less", non-table-driven algorithm in the polynomial form x^16 + x^15 + x^2 + 1. The algorithm appends two bytes to the end of my transmit packet; the slave then runs the CRC check on the whole received packet, CRC bytes included. Using CRC-16, the end result over the whole packet should be zero. This CRC-16 algorithm is used in Modbus, SDLC, USB, disk drives, by IBM, and by many others.
CRC-16 Error Checking Accuracy:
Single-Bit Errors: 100 percent
Double-Bit Errors: 100 percent
Errors with an Odd Number of Bits: 100 percent
Burst Errors Shorter than 16 Bits: 100 percent
Burst Errors of Exactly 17 Bits: 99.9969 percent
All Other Burst Errors: 99.9984 percent
Question:
Is this overkill for my application or is there something out there with better accuracy and simpler to use?
You mentioned in another thread that you were using 32-byte packets. Is that still your intention? The LIN protocol uses packets of 8 bytes or fewer and a simple checksum for error detection. Since you have decided to deviate from this, are you using any of the LIN protocol at all?
Presumably you are using LIN transceivers? Which of their features led you to pick them? The ones I'm familiar with require quite a few passive components around them to ensure reliable operation. Is there something about your system that makes them necessary, or could you just as easily use a differential driver as in RS-485? Those can be much cheaper to run reliably and still allow addressable slave nodes. They can also communicate at a considerably higher data rate than LIN transceivers, which might affect your choice of error detection/correction.
Feel free to ignore all these questions. I'm just curious, as I've used LIN in a CAN environment where the protocols were rigidly adhered to, and that doesn't seem to be your intention.
Since the Arduino/Freeduino has a 128-byte receive buffer, I decided to keep all transmissions below 128 bytes. My next-higher-up "data byte" packet is 64 data bytes + 4 header bytes + 2 CRC bytes.
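For concreteness, that framing could be sketched in code something like the following. The header field names are only my guesses for illustration, not the actual layout:

// Hypothetical layout for the 70-byte packet described above:
// 4 header bytes + 64 data bytes + 2 CRC bytes = 70 bytes,
// which fits comfortably in the 128-byte receive buffer.
#define PKT_HDR_LEN   4
#define PKT_DATA_LEN  64
#define PKT_CRC_LEN   2
#define PKT_TOTAL_LEN (PKT_HDR_LEN + PKT_DATA_LEN + PKT_CRC_LEN)  // 70

struct DataPacket {
  unsigned char header[PKT_HDR_LEN];  // e.g. sync, slave address, command, length (guesses)
  unsigned char data[PKT_DATA_LEN];   // payload
  unsigned char crc[PKT_CRC_LEN];     // CRC-16, appended last
};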
The LIN 2.0 protocol has an 8-data-byte maximum payload, so I chose NOT to use it. The LIN transceiver (Microchip MCP2021) goes for $0.95 each and has a built-in 5 VDC, 50 mA max regulator plus a bus-diagnostic fault pin. I could multi-drop 16 nodes using Ethernet cabling and use only one power supply to power all the nodes. Cheap/reliable/easy to use.
Those are some good reasons. Be aware that that is the 100+ quantity price for the Microchip device.
So, your slave nodes are going to be dumbish, i.e. they won't be Arduinos or you wouldn't need the voltage regulator. What are your slaves, by the way?
Those are some good reasons. Be aware that that is the 100+ quantity price for the Microchip device.
Digikey for $1.38 / per one.
So, your slave nodes are going to be dumbish, i.e. they won't be Arduinos or you wouldn't need the voltage regulator. What are your slaves, by the way?
Posted by: ArduinoAndy
The LIN Master will have the muscle/HP - A Mega or Rugged Circuits Gator+ with two serial ports.
So are you going to toss out the regulator that comes with the kits, or have you arranged a deal to buy them without it so that using the LIN regulators makes some kind of sense? Or just buy bare boards, I guess. Are you going to make a PCB for the transceivers or just perf-board them?
At any rate, it looks like a fun, worthwhile project, and it's clear you are thinking it through.
So are you going to toss out the regulator that comes with the kits, or have you arranged a deal to buy them without it so that using the LIN regulators makes some kind of sense?
Cheaper is better.
Are you going to make a PCB for the transceivers or just perf-board them?
Both - Radio Shack breadboard & ExpressPCB for the final cut.
All of this is in the prototype stage, and testing, re-testing, and debugging are the norm.
For those who have a keen eye for details...
On the LIN node R1 schematic:
The transmit LED is yellow - not red.
The receive LED is green.
The LIN fault LED is red.
All LEDs are "special" low-current Kingbright parts that draw 2 mA each to conserve power.
BTW .. I forgot to post my CRC-16 code ... here it is.
/* The function KIRSP_cal_CRC16 takes two parameters:
   unsigned char *CRC16_Data_Array = a pointer to the array of binary data used to generate the CRC
   unsigned short CRC16_Data_Array_Len = the number of bytes in the data array
Please do not modify
Revision 1.0b 6/26/09
This optimized CRC-16 calculation uses no tables or EEPROM.
Polynomial: x^16 + x^15 + x^2 + 1 (0xA001) <- the key value
Initial value: 0xFFFF
CRC-16 is commonly used in disk-drive controllers.
Modified for the Arduino/Freeduino.
The sending system calculates a CRC and appends it to the message. The receiving system calculates a new
CRC over the entire message - including the appended CRC bytes. The resulting CRC should be 0x0000.
If the CRC-16 calculated by the receiving system is not equal to zero, then an error occurred in the
transmission and all data should be ignored. */
// KIRSP Global Variables for CRC-16 Checking
unsigned char KIRSP_CRC16_Hi_Byte = 0xFF;  // Do not modify
unsigned char KIRSP_CRC16_Low_Byte = 0xFF; // Do not modify
unsigned int KIRSP_cal_CRC16(unsigned char *CRC16_Data_Array, unsigned short CRC16_Data_Array_Len)
{
  unsigned int crc = 0xFFFF;       // initial value
  while (CRC16_Data_Array_Len--)
  {
    crc ^= *CRC16_Data_Array++;    // XOR the next data byte into the low byte of the register
    for (int i = 0; i < 8; ++i)    // shift out each of its eight bits
    {
      if (crc & 1)
        crc = (crc >> 1) ^ 0xA001; // <----The key
      else
        crc = (crc >> 1);
    }
  }
  // The CRC is sent low byte first, so the "Hi" (first-transmitted) byte
  // holds the low-order byte of the register.
  KIRSP_CRC16_Hi_Byte = lowByte(crc);   // write to global variable
  KIRSP_CRC16_Low_Byte = highByte(crc); // write to global variable
  return crc;
} // end of KIRSP_cal_CRC16 function
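To show how the append/verify scheme described in the comments would look in practice, here is a minimal usage sketch built on the function above (appendCRC16 and packetOK are hypothetical helper names, not part of the original code):

// Sender side: compute the CRC over the payload and append it.
// The function's final byte swap means the register's low byte goes
// out first, which is what makes the receiver's check come out zero.
void appendCRC16(unsigned char *packet, unsigned short payloadLen)
{
  KIRSP_cal_CRC16(packet, payloadLen);           // fills the two CRC globals
  packet[payloadLen]     = KIRSP_CRC16_Hi_Byte;  // appended first
  packet[payloadLen + 1] = KIRSP_CRC16_Low_Byte; // appended second
}

// Receiver side: run the CRC over the whole packet, CRC bytes included.
// A result of 0x0000 means no detected errors.
bool packetOK(unsigned char *packet, unsigned short totalLen)
{
  return KIRSP_cal_CRC16(packet, totalLen) == 0x0000;
}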
Question:
Is this overkill for my application or is there something out there with better accuracy and simpler to use?
That depends on your application and requirements; answering that kind of question is somewhat subjective. I do think that a simple checksum error-detection method is often overlooked. It is what Intel HEX files use, and it is pretty good for its simplicity and low overhead. While it doesn't have the statistical robustness of a CRC, it's a useful tool for the toolbox.
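Since the Intel HEX checksum is just the two's complement of the byte sum, it only takes a few lines - a minimal sketch for comparison (my own illustration, not code from this project):

// 8-bit checksum in the Intel HEX style: the two's complement of the
// modulo-256 sum of all bytes. Appending it to a record makes the sum
// of the whole record (data + checksum) come out to zero.
unsigned char checksum8(const unsigned char *buf, unsigned short len)
{
  unsigned char sum = 0;
  while (len--)
    sum += *buf++;               // unsigned char wraps modulo 256
  return (unsigned char)(-sum);  // two's complement of the sum
}

The receiver just sums every byte, checksum included, and rejects the record if the result isn't zero.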
Of course, a forward error-correcting method would also be nice to have alongside these - something like Hamming codes?
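For anyone curious, Hamming(7,4) packs 4 data bits plus 3 parity bits into a 7-bit codeword and can correct any single-bit error. A minimal sketch of the idea (illustrative only, not something from this project):

// Encode one nibble into a 7-bit Hamming(7,4) codeword.
// Even parity bits sit at positions 1, 2 and 4 (bit 0 = position 1).
uint8_t hamming74_encode(uint8_t nibble)
{
  uint8_t d0 = nibble & 1, d1 = (nibble >> 1) & 1;
  uint8_t d2 = (nibble >> 2) & 1, d3 = (nibble >> 3) & 1;
  uint8_t p1 = d0 ^ d1 ^ d3;  // covers positions 3, 5, 7
  uint8_t p2 = d0 ^ d2 ^ d3;  // covers positions 3, 6, 7
  uint8_t p4 = d1 ^ d2 ^ d3;  // covers positions 5, 6, 7
  return p1 | (p2 << 1) | (d0 << 2) | (p4 << 3) |
         (d1 << 4) | (d2 << 5) | (d3 << 6);
}

// Decode a codeword: a non-zero syndrome is the 1-based position of a
// single flipped bit, which is corrected before the data is extracted.
uint8_t hamming74_decode(uint8_t cw)
{
  uint8_t s1 = ((cw >> 0) ^ (cw >> 2) ^ (cw >> 4) ^ (cw >> 6)) & 1;
  uint8_t s2 = ((cw >> 1) ^ (cw >> 2) ^ (cw >> 5) ^ (cw >> 6)) & 1;
  uint8_t s4 = ((cw >> 3) ^ (cw >> 4) ^ (cw >> 5) ^ (cw >> 6)) & 1;
  uint8_t syndrome = s1 | (s2 << 1) | (s4 << 2);
  if (syndrome)
    cw ^= 1 << (syndrome - 1);  // flip the bad bit back
  return ((cw >> 2) & 1) | (((cw >> 4) & 1) << 1) |
         (((cw >> 5) & 1) << 2) | (((cw >> 6) & 1) << 3);
}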
The XModem CRC is obsolete and flawed. You should be using the CCITT CRC with a preset of 0xFFFF. There are variations on the table-driven CRCs; some use only a 16-entry table. The table-driven routine will generally be about 100 times faster than the bit-shifting CRC function (it depends on the table size: the bigger the table, the faster the function). A table for the CCITT function can be 512 bytes. If you are using the mega1280 (like I am) with 128K of flash, this is insignificant.
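Here is a minimal sketch of that 16-entry variant, assuming the CCITT polynomial 0x1021 with the 0xFFFF preset (the table holds the CRCs of the 16 possible nibbles - 32 bytes of flash):

// CRC-CCITT (poly 0x1021, preset 0xFFFF), processed 4 bits at a time
// with a 16-entry lookup table instead of the full 256-entry version.
static const uint16_t ccittNibbleTable[16] = {
  0x0000, 0x1021, 0x2042, 0x3063, 0x4084, 0x50A5, 0x60C6, 0x70E7,
  0x8108, 0x9129, 0xA14A, 0xB16B, 0xC18C, 0xD1AD, 0xE1CE, 0xF1EF
};

uint16_t crc16_ccitt(const uint8_t *data, uint16_t len)
{
  uint16_t crc = 0xFFFF;  // the 0xFFFF preset
  while (len--)
  {
    uint8_t b = *data++;
    crc = (crc << 4) ^ ccittNibbleTable[((crc >> 12) ^ (b >> 4)) & 0x0F];   // high nibble
    crc = (crc << 4) ^ ccittNibbleTable[((crc >> 12) ^ (b & 0x0F)) & 0x0F]; // low nibble
  }
  return crc;
}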
I have some trouble with the CRC calculation; the problem is that I can't get the right CRC checksum.
The data array is the following: unsigned char CTCAPPL[] = {"3F10010080"};
and the checksum comes out as 40972 because it reads the data as ASCII characters. I need it calculated on the HEX values to give me the right checksum, which in my case is 42340. So does anyone know how I can get the CRC calculation based on the HEX values?
You need to extract each character, and convert the character to a number. For '0' through '9', this is easy. Subtract '0' from the character to get an integer.
For 'A' through 'F', subtract 'A' and add 10.
Then, for each pair of values, multiply the first by 16 and add the second value. The result is the byte value.
The checksum needs to be computed based on the values 0x3F, 0x10, 0x01, 0x00, and 0x80, if I understand the problem, and you have a string "3F10010080".
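Something along these lines would do the conversion before calling the CRC routine (hexToBytes is just an illustrative helper name, and it assumes a well-formed hex string):

// Convert an ASCII hex string such as "3F10010080" into raw bytes:
// {0x3F, 0x10, 0x01, 0x00, 0x80}. Returns the number of bytes written.
unsigned short hexToBytes(const char *hex, unsigned char *out)
{
  unsigned short n = 0;
  while (hex[0] && hex[1])
  {
    // first digit * 16 + second digit; '& ~0x20' upper-cases a-f
    unsigned char hi = (hex[0] <= '9') ? hex[0] - '0' : (hex[0] & ~0x20) - 'A' + 10;
    unsigned char lo = (hex[1] <= '9') ? hex[1] - '0' : (hex[1] & ~0x20) - 'A' + 10;
    out[n++] = (hi << 4) | lo;
    hex += 2;
  }
  return n;
}

// Usage:
// unsigned char buf[5];
// unsigned short len = hexToBytes("3F10010080", buf);
// unsigned int crc = KIRSP_cal_CRC16(buf, len);  // CRC over the binary values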