
Topic: Using SoftwareSerial with custom baud rates


So I see that (new)SoftwareSerial only supports the common baud rates, and each supported baud rate needs to be defined in a table in SoftwareSerial.cpp.

I'm trying to figure out how to add support for 7800 baud.  I sort of figured the rates would be linear (the 9600 entry would be half of the 4800 entry), but that doesn't seem to be the case.  How can I calculate the appropriate timings?  I know my Uno runs at 16 MHz, so hopefully that's fast enough to stay within the usual 2% timing tolerance.  Luckily for me, each message is repeated constantly (10x per second), so if I get out of sync I can always throw away that 8-byte message and wait for the next one.  I also only need to read, not write.

Thanks for any hints and advice!


I sort of figured the rates would be linear (the 9600 entry would be half of the 4800 entry), but that doesn't seem to be the case.

They probably have to account for some fixed processing overhead.

I tried to reverse-engineer the timings, but my results look odd.  I calculated (clockrate / baudrate) / loopcount to see how many instruction cycles each unit of loop count was delaying; for the slower baud rates that should be a significant portion of the loop delay.  For the 8 MHz table there seem to be about 3.5 instruction cycles per loop count, for the 16 MHz table it's 14 instruction cycles per loop count, and for the 20 MHz table it's 7 instruction cycles per loop count.  Since the code is the same for all three clock rates, it seems odd that the results aren't even close to one another. :(
Send Bitcoin tips to: 1G2qoGwMRXx8az71DVP1E81jShxtbSh5Hp


robtillaart did a big writeup on computing timings, rather than hard-coding them, for SoftwareSerial. As I recall, his results were pretty good for rates up to 57,600. See if you can find that thread.
The art of getting good answers lies in asking good questions.


The code has a DebugPulse() call in the inner loop.  It looks like the intent was to fine-tune the constants with an oscilloscope... but then, when you take out the "#define _DEBUG 1", the timing would be different!

My calculations show:

8 MHz:  subtract 8.5 cycles and divide the remainder by 3.5
16 MHz: subtract 65 cycles and divide the remainder by 14.0
20 MHz: subtract 13 cycles and divide the remainder by 7.0

For example, 7800 baud:

8,000,000 / 7800 = 1025.64;  1025.64 - 8.5 = 1017.14;  1017.14 / 3.5 ≈ 291  (9600 -> 236, 4800 -> 474)
16,000,000 / 7800 = 2051.28;  2051.28 - 65 = 1986.28;  1986.28 / 14 ≈ 142  (9600 -> 114, 4800 -> 233)
20,000,000 / 7800 = 2564.10;  2564.10 - 13 = 2551.10;  2551.10 / 7 ≈ 364  (9600 -> 297, 4800 -> 595)

The results seem to fall within the expected range: each 7800-baud count is greater than the corresponding 9600-baud count and less than the 4800-baud count.

The various factors are averages.  For 8 MHz the subtracted overhead ranges from 7.167 to 11.0 cycles; for 16 MHz it's 58.667 to 68.889; for 20 MHz it's the worst: 1.667 to 46.222.  Fortunately, at those clock speeds that's only a spread of a couple of microseconds.  I'm still wondering how the same loop, run at different clock speeds, can have such vastly different numbers of clock cycles for overhead and per-iteration time.



Thank you!  I found that thread, and based on the results it should do exactly what I need. :)  I'll give it a try tonight when I get home.
