Write own UART possible?

Hello,

is it possible to write a library or program that implements a UART that is not standard?

I mean something like Serial.begin(speed, config),
where speed = 19200 and
config = 2 start bits, 16 data bits, 1 parity bit, 2 stop bits, IDLE = LOW
e.g. ...00000000000000.10.1111000011110000.0.01.0000000000000000000000...

I am using Arduino UNO and/or MEGA.

Thank you.

Sixteen data bits is crazy. Timing errors will accumulate.

Unfortunately I am constrained to do so. If I could, I would prefer other ways...

I'm not here to stop you doing crazy things, so why not have a look at the source of SoftwareSerial?
Just make sure your timing is absolutely spot-on.

Edit: Hold on a moment:

e.g. ...00000000000000.10.1111000011110000.0.01.0000000000000000000000...

Your start and stop bit pairs have opposite polarity?

Is there something you're not telling us?

AWOL:
Is there something you're not telling us?

Just about everything, actually. As usual.

So I shall not bother to go into detail regarding timing, protocols, bit-stuffing etc.

Hmmm, why shouldn't '10' be allowed as the start marker and '01' as the stop marker?

I would really like to tell you more details, but I am not a software programmer... So what info do you need to be able to help me?

I can give you a short overview of what I am trying to achieve:

My Arduino MEGA is connected to a "MAX488CPA", which translates TTL into RS422 and vice versa.
Over this RS422 link, an SMU (satellite management unit) sends a telecommand at 19.2 kbaud that looks like this:

IDLE (zeros) . 10 (start-bit pair) . 16 data bits . 1 parity bit (even parity) . 01 (stop-bit pair) . IDLE (zeros)

After that my MEGA shall send back telemetry data in the same format, i.e. IDLE.10.DATA.PARITY.01.IDLE
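
For concreteness, here is a minimal sketch of packing one such frame into a 32-bit variable, assuming the bits go out MSB-first as in the example above (the helper name is made up for illustration):

uint32_t buildFrame (uint16_t data)
{
  uint8_t parity = 0;                 // even parity over the 16 data bits
  for (uint8_t i = 0; i < 16; i++)
    parity ^= (data >> i) & 1;

  uint32_t frame = 0;
  frame |= (uint32_t) 0b10 << 19;     // start pair '10'  (bits 20..19)
  frame |= (uint32_t) data << 3;      // 16 data bits     (bits 18..3)
  frame |= (uint32_t) parity << 2;    // parity bit       (bit 2)
  frame |= 0b01;                      // stop pair '01'   (bits 1..0)
  return frame;                       // bit 20 is transmitted first
}

With data = 0xF0F0 this yields exactly the 10.1111000011110000.0.01 pattern from the first post.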

What else do you need me to tell you?

Hmmm, why shouldn't '10' be allowed as the start marker and '01' as the stop marker?

Because start and stop bits normally have the same polarity.

How a UART works (roughly):
UART samples the input line, detects the edge of a (possible) start bit.
Waits half of one bit period and samples again.
If the start condition persists, it starts sampling every bit period, so that the samples fall as close to the centre of the bit as possible.
For the usual five, seven or eight bits, you can allow a little slop, but sixteen bits is going to require very accurate timing, otherwise the last bits may be mis-sampled.
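
As a rough sketch of that scheme for this frame (52 µs bit period at 19200 baud, idle LOW; the pin number and function name are illustrative, and the delayMicroseconds() loop accumulates exactly the kind of timing error warned about above):

const uint8_t  rxPin     = 2;    // assumed input pin
const uint16_t BIT_US    = 52;   // one bit period at 19200 baud
const uint8_t  FRAME_LEN = 21;   // 2 start + 16 data + 1 parity + 2 stop

bool readFrame (uint8_t bits [])
{
  while (digitalRead (rxPin) == LOW) ;  // wait for the leading edge
  delayMicroseconds (BIT_US / 2);       // move to the middle of the first bit
  if (digitalRead (rxPin) == LOW)
    return false;                       // glitch, not a real start bit

  bits [0] = 1;                         // first start bit, just confirmed
  for (uint8_t i = 1; i < FRAME_LEN; i++)
  {
    delayMicroseconds (BIT_US);         // advance one nominal bit period
    bits [i] = digitalRead (rxPin);     // sample near the bit centre
  }
  return true;
}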

OK thank you so far :slight_smile:

At the moment I am using this code to receive the 21 bits:

#include <digitalWriteFast.h> // for pinModeFast / digitalReadFast2

// (Declarations below were not shown in the original post; values are examples.)
const int PinRead  = 2;       // RX from the MAX488
const int PinWrite = 3;       // TX to the MAX488
const int len      = 21;      // 2 start + 16 data + 1 parity + 2 stop bits
uint32_t buf_Rx [21];
unsigned long timer_begin, next_slot;
uint32_t RS422_TC;

void setup ()
{ 
  pinModeFast(PinRead, INPUT);
  pinModeFast(PinWrite, OUTPUT);
  Serial.begin (115200);
  Serial3.begin (115200);
} // end of setup

void loop()
{
  while (digitalReadFast2(PinRead) == LOW); // busy-wait for the start edge
  receiveRS422 () ;
}

and

void receiveRS422 ()
{
  timer_begin = micros (); // store time when first '1' detected
  next_slot = timer_begin + 26; // store next time slot to mid of next byte (26 us)

  for ( int i = 0; i < len; i++ )
  {
    while( next_slot > micros () ); // wait for next time slot
    buf_Rx [i] = digitalReadFast2 ( PinRead ); // read bit
    next_slot += 52; // store next time slot for next bit to be read
  } // end of for
  RS422_Rx_bits_to_var( buf_Rx );
}

void RS422_Rx_bits_to_var( uint32_t buf_Rx [] )
// This function translates the received RS422 command bits
// (read via digitalReadFast2()) into a uint32_t variable
{
  RS422_TC = 0;
  for ( int i = 0; i < 21; i++ )
  {
    RS422_TC += ( buf_Rx[i] << ( 20 - i ) );
  } // end of for
  Serial.print("Received: ");
  Serial.println(RS422_TC, HEX);
}

This works with approx. 2 errors per 150,000 commands.
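
Since the frame carries fixed start/stop pairs and an even parity bit, at least some of those errors can be flagged in software. A minimal sketch, assuming RS422_TC holds the 21 bits with the first start bit at bit 20 (the function name is made up):

bool frameOK (uint32_t f)
{
  if (((f >> 19) & 0b11) != 0b10) return false;  // start pair must be '10'
  if ((f & 0b11) != 0b01)         return false;  // stop pair must be '01'

  uint8_t ones = 0;
  for (uint8_t i = 2; i <= 18; i++)              // 16 data bits + parity bit
    ones ^= (f >> i) & 1;
  return ones == 0;                              // even parity holds
}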

I think that might be improved with a "true" UART. So do you know if I can manipulate the Arduino's UART to extend it to the 21-bit format I described?

Thank you for your help!!

So do you know if I can manipulate the Arduino's UART to extend it to the 21-bit format I described?

No, you can't - its registers simply aren't wide enough.

I can't see your initial half-bit period sampling.

while( next_slot >= micros () ); // wait for next time slot

would improve your timing.
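
For what it's worth, a signed-difference form of the same busy-wait also survives the rollover of micros() (roughly every 70 minutes):

while ( (long) (micros () - next_slot) < 0 ) ; // wait for next time slot, rollover-safe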

void receiveRS422 ()
{
  timer_begin = micros (); // store time when first '1' detected
  next_slot = timer_begin + 26; // store next time slot to mid of next byte (26 us)

  for ( int i = 0; i < len; i++ )

There it is, in the line with the + 26: that is the half-bit (26 µs) offset.

// store next time slot to mid of next byte (26 us)

With byte I mean bit, of course...

Sorry, I missed the 26.

Did you try the correction to the busy wait I made?

Yes, I applied your correction, and it seems it reduced the error rate from 8 ppm to 6 ppm :smiley: But that could be somewhat random, I guess... I ran approx. 600,000 commands (transmit/receive).

Tomorrow I have access to an oscilloscope and will take a look at the timings. If you want, I can upload some screenshots.

This demo, "yet another software serial", may interest you.

It is my attempt to make a very simple software serial.

...R

Do the erroneous bits tend to occur towards the end of the data? i.e. is the issue drift as AWOL suspected? Could an Arduino with a higher clock speed improve matters?

SoftwareSerial (Arduino library) uses hand-coded timing loops to achieve the desired intervals. Accumulated timing error could be an issue, although 19200 baud isn't very fast. I couldn't get SoftwareSerial to work at 19200 when I tried it with a GPS recently; I don't know why it wouldn't work.

AltSoftSerial (available on GitHub) uses the hardware features of a 16-bit timer (timer1 on an Uno) to determine the duration between rising and falling edges. It is limited to specific digital pins, but if that's not an issue it may be the best software serial. I haven't tested it myself, both because of the pin limitation and because I want to keep timer1 (the only 16-bit timer on an Uno) free for other use.

Robin's sssSerial uses timer2 to determine the bit width. There is some jitter since it is a purely software solution, unlike AltSoftSerial.

I wrote my own implementation recently that uses a pin change interrupt to detect edges, rather than sampling in the middle of the bit. It works without using a timer interrupt because in my project the RX data is ASCII. This means that the 8th bit is always a zero, and thus there is a rising edge with every stop bit. It determines the duration between edges by reading timer0, which is already programmed by the system to tick every 4 µs. This allows it to work at 9600, 19200 and 38400 baud. Because of the nature of the start/stop bits in your project, this sort of approach may also work for you.
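
A minimal sketch of that edge-timestamp idea, assuming an Uno with the RX line on pin 8 (PB0/PCINT0) and using micros() rather than reading timer0's registers directly:

volatile unsigned long lastEdgeUs = 0;

void setupEdgeCapture ()
{
  pinMode (8, INPUT);      // PB0 on an Uno
  PCICR  |= _BV (PCIE0);   // enable pin-change interrupts on port B
  PCMSK0 |= _BV (PCINT0);  // ...for PB0 only
}

ISR (PCINT0_vect)          // fires on every edge of the RX line
{
  unsigned long now   = micros ();   // 4 µs granularity on a 16 MHz AVR
  unsigned long width = now - lastEdgeUs;
  lastEdgeUs = now;
  // at 19200 baud, width / 52 approximates the number of identical
  // bits since the previous edge
}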

Or you could implement it in hardware yourself.

I found out that most of the time (>90%) the measured duration of the 1st sent bit (which is sampled as the reference for the duration of one bit) is correct, i.e. approx. 52 µs. But sometimes, unfortunately (<10%), the 1st bit's measured duration lands somewhere around 50-54 µs. And when that happens, framing errors appear.

I think the resolution of the micros() function is at fault. And yes, a higher clock speed might help, if it brought micros()'s resolution down to 2 or 1 µs.
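
One way to get finer resolution than micros() on a 16 MHz AVR, without a faster clock, is to read a hardware timer directly. A sketch assuming Timer1 is otherwise free (which, as noted above for AltSoftSerial, it may not be):

void timer1Init ()
{
  TCCR1A = 0;          // normal counting mode
  TCCR1B = _BV (CS11); // prescaler 8: one tick = 0.5 µs at 16 MHz
  TCNT1  = 0;
}

// 0.5 µs resolution timestamp; the counter wraps about every 32.8 ms
uint16_t ticksHalfMicros ()
{
  return TCNT1;
}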

Why would you sample a bit to figure out the bit width?

The bit width is set by the baud rate and shouldn't need to be measured.

...R

How did you measure the bit width?

I use timer0, the same timer that micros() uses, and can send/receive at 38400. There is no accumulated timing error and so no frame error; only jitter.

KevKO:
Hmmm, why shouldn't '10' be allowed as the start marker and '01' as the stop marker?

AWOL:
Because start and stop bits normally have the same polarity.

Because start and stop bits normally have the opposite polarity, so that they may be positively distinguished. In fact, in your protocol it does not actually matter, because the "idle" sequence that always occurs between data packets is the real stop bit.

The "10" and "01" sequences are in fact, calibration markers which can be used to control a PLL (Phase Locked Loop) which continuously or retrospectively corrects for the bit rate - that is to say, you measure the whole packet length, divide it by 20 and use that to determine your sample times.

It will suffice, however, if you can guarantee the accuracy of the sending clock and calibrate your code accordingly; then it should work satisfactorily. And the fact that you refer to measured error rates makes me suspect that this data is, in fact, checksummed?